Re: [BackupPC-users] dump failed: can't find Compress::Zlib

2008-09-03 Thread Nils Breunese (Lemonbit)
nadia kheffache wrote:

> I have installed BackupPC 3 on CentOS 5 (a Red Hat distribution) and
> backups were working; today I got an error when I started a backup manually:
> dump failed: can't find Compress::Zlib
>
> I ran cpan> install Compress::Zlib, but I have other dependencies; when
> I install any Perl module, it needs yet more dependencies!
>
> Warning: prerequisite Compress::Raw::Zlib 2.014 not found. We have 2.012.
> Warning: prerequisite IO::Compress::Deflate 2.014 not found. We have 2.011.
> Warning: prerequisite IO::Compress::Gzip 2.014 not found. We have 2.011.
> Warning: prerequisite IO::Compress::Gzip::Constants 2.014 not found. We have 2.011.
> Warning: prerequisite IO::Compress::RawDeflate 2.014 not found. We have 2.011.
> Warning: prerequisite IO::Compress::Zip 2.014 not found. We have 2.011.
> Warning: prerequisite IO::Uncompress::Gunzip 2.014 not found. We have 2.011.
> Warning: prerequisite IO::Uncompress::Inflate 2.014 not found. We have 2.011.
> Warning: prerequisite IO::Uncompress::RawInflate 2.014 not found. We have 2.011.
> Warning: prerequisite IO::Uncompress::Unzip 2.014 not found. We have 2.011.
> Writing Makefile for IO::Compress::Base
>
>
> Can you help me please.

For Red Hat/CentOS I recommend installing the required Perl modules  
from RPMForge via yum instead of using CPAN. Never had a problem with  
that. Yum automatically solves dependencies and it's easier to keep  
everything on your system up to date. I'm not familiar with solving  
dependencies using CPAN, so I can't help you there.
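
Something like this should pull them all in at once, assuming the RPMForge
repository is already configured (the package names are from memory, so
check with 'yum search Compress' if they don't resolve):

  yum install perl-Compress-Zlib perl-Compress-Raw-Zlib perl-IO-Compress-Zlib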

Nils Breunese.



Re: [BackupPC-users] backup specific directory on linux

2008-09-03 Thread Alex Dehaini
You are right, Holger.

backuppc is reading the config.pl file and not the individual file I created
for that device when using rsync. I will go through the config.pl file once
more.
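
For reference, a minimal per-host override (the file name and location are
distribution-dependent, e.g. /etc/backuppc/<host>.pl) would only need the
settings that differ from the global config.pl, something like:

  $Conf{XferMethod} = 'rsync';
  $Conf{RsyncShareName} = ['/backups'];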

Thanks for your help.

Lex

On Wed, Sep 3, 2008 at 12:11 AM, Holger Parplies <[EMAIL PROTECTED]> wrote:

> Hi,
>
> Alex Dehaini wrote on 2008-09-02 15:58:29 + [[BackupPC-users] backup
> specific directory on linux]:
> > hi Guys,
> >
> > I am trying to backup /backups directory using rsync but when I start the
> > backup, backuppc is backing up the entire / directory of the remote
> maine.
> > this is my config
> >
> > $Conf{XferMethod} = 'rsync';
> > $Conf{RsyncClientPath} = '/usr/bin/rsync';
> > $Conf{RsyncClientCmd} = '$sshPath -q -x -l root $hostIP $rsyncPath
> > $argList+';
> > $Conf{RsyncClientRestoreCmd} = '$sshPath -q -x -l root $hostIP $rsyncPath
> > $argList+';
> > $Conf{RsyncShareName} = '/backups';
> >
> > Am I doing something wrong?
>
> yes, otherwise it would work.
>
> The default value for $Conf{RsyncShareName} is '/', so I'd guess "your
> config" is not being used. I notice you're using $hostIP and not $host.
> You've presumably got a reason for that, and that reason is probably
> related to your problem. As far as "your config" is concerned, it looks
> reasonable to me. I would expect it to work.
>
> You should also check the XferLOG for the command actually executed
> (and, of course, tell us more about your configuration).
>
> Regards,
> Holger
>



-- 
Alex Dehaini
Developer
Site - www.alexdehaini.com
Email - [EMAIL PROTECTED]


[BackupPC-users] backuppc logo

2008-09-03 Thread Alex Dehaini
Hi Guys,

Where is the backuppc logo that appears on the cgi interface stored?

-- 
Alex Dehaini
Developer
Site - www.alexdehaini.com
Email - [EMAIL PROTECTED]


Re: [BackupPC-users] backuppc logo

2008-09-03 Thread Alex Dehaini
Got it

On Wed, Sep 3, 2008 at 9:02 AM, Alex Dehaini <[EMAIL PROTECTED]> wrote:

> Hi Guys,
>
> Where is the backuppc logo that appears on the cgi interface stored?
>
> --
> Alex Dehaini
> Developer
> Site - www.alexdehaini.com
> Email - [EMAIL PROTECTED]
>



-- 
Alex Dehaini
Developer
Site - www.alexdehaini.com
Email - [EMAIL PROTECTED]


Re: [BackupPC-users] Backup through slow line?

2008-09-03 Thread Nils Breunese (Lemonbit)
dan wrote:

> consider
> 1) do you need to backup all of the files on that system?  are there  
> some large files or mp3 or video files that can be skipped?   
> consider narrowing the scope of the backup to just appropriate files.
> 2) consider compressing the data.  With such a small pipe you will  
> be able to realize some benefit to using maximum compression with  
> rsync.
> 3) consider bumping up the available bandwidth.  I buy DSL lines in  
> many markets for my company and have never seen a situation where  
> getting a 256Kb upload cost substantially more than a 128Kb link.
> 3a) sounds like you might be using ADSL, see if SDSL is available or  
> see if there is a plan with more upload and the same download  
> capacity.
> 4) consider changing the full backup in backuppc to NOT use the
> --ignore-times option that causes all of the files to be transferred.
> This would make it essentially an incremental backup that sticks
> around.  I have great success doing 1 full backup and a month's
> worth of incrementals.  Saves a TON of bandwidth and shortens
> backups considerably.  I can back up a server with 4GB of data that
> changes infrequently in about 10 minutes via incrementals vs 3 hours
> for a full over a 256Kb link.  The vast majority of the time is spent
> building and transferring the file list, because the server hosts
> email, whose files are so small.  This server has nearly 1 million
> files, which gives rsync a workout.

I'd try excluding some large directories, make sure that the backups
succeed, and then slowly remove the excludes one by one, building up
your full backup that way. Once everything is on the remote backup
server, things should go more smoothly.
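
For example, a first cut at excludes for a root share might look like this
(the paths are just examples, not a recommendation):

  $Conf{BackupFilesExclude} = {
      '/' => ['/proc', '/sys', '/tmp', '/var/cache', '/home/*/Videos'],
  };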

Nils Breunese.



Re: [BackupPC-users] backuppc logo

2008-09-03 Thread Nils Breunese (Lemonbit)
Alex Dehaini wrote:

> Where is the backuppc logo that appears on the cgi interface stored?

Some ideas:

1. Right click the image in your browser and copy the image location.
That should give you a clue.
2. The BackupPC installer script asks for an 'Apache image directory'
and the logo.gif file is stored there. It's
/var/www/html/BackupPC/logo.gif on our servers.
3. If you used a distribution package then you may not have chosen
this location yourself. Ask your package manager for a file listing
for the BackupPC package and grep it for logo.gif (see the commands
below).
4. A plain 'locate logo.gif' could help if you don't have thousands of
files named like that on your machine.
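
For example, one of these, depending on your distribution:

  rpm -ql BackupPC | grep logo.gif    # Red Hat/CentOS
  dpkg -L backuppc | grep logo.gif    # Debian/Ubuntu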

Nils Breunese.



Re: [BackupPC-users] dump failed: can't find Compress::Zlib

2008-09-03 Thread nadia kheffache
Sorry, I accidentally replied just to your email.



--- On Wed, 3 Sep 2008, Nils Breunese (Lemonbit) <[EMAIL PROTECTED]>
wrote:
From: Nils Breunese (Lemonbit) <[EMAIL PROTECTED]>
Subject: Re: [BackupPC-users] dump failed: can't find Compress::Zlib
To: [EMAIL PROTECTED]
Date: Wednesday 3 September 2008, 11:25

Hello Nadia,

> Thank you for your response, but I already installed BackupPC back in
> June and I installed all packages using yum; I was backing up my files
> with no problem using rsyncd and smb.
> The problem occurred yesterday.
>
> Now, can I reinstall BackupPC? Or can I reinstall only the Perl
> module after uninstalling it?

Please reply to the mailing list so discussions are public and can be
archived.

Nils Breunese.





Re: [BackupPC-users] Backup through slow line?

2008-09-03 Thread Rob Owens
>> Christian Völker wrote:
>>> Now I want to back up these servers. Obviously it takes more than 24
>>> hours to perform a full backup, so the running backup fails after 24 hours.
>>>
>>> BackupPC now schedules the next full backup, which runs for 24h and
>>> fails... and so on.
> 
Try setting ClientTimeout (on the Backup Settings tab) to something
larger than the default 72000.  72000 seconds is 20 hours, which is
approximately how long you say it takes your backup to fail.

I recommend switching it back after you've got a complete full.
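
Something like this in the host's config file should do (the value is just
a guess at "long enough"; 172800 seconds is 48 hours):

  $Conf{ClientTimeout} = 172800;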

Of course the other option is to put the BackupPC server on the same
local network as the machines you're backing up in order to get your
first backup, and then move it off-site.

-Rob




Re: [BackupPC-users] Backup through slow line?

2008-09-03 Thread Rob Owens
Christian Völker wrote:
> |> Assume, the full backup is finished after two weeks- will the next full
> |> backup take the same amount of time?
> | If you are using rsync it will be much faster next time, sending only
> | the changes.
> If so, again the question: what is the difference between a full
> rsync backup and an incremental one? I think I'll have to read the docs
> again to understand this... tried already several times, didn't succeed ;-)
> 
With rsync, the difference between a full and incremental has more to do
with processor and disk activity than with bandwidth.  Both use about
the same bandwidth.  If you ask me, it doesn't make sense to call them
"full" and "incremental", but I guess those terms are holdovers from the
days when BackupPC didn't offer rsync as a transport.

-Rob




[BackupPC-users] hanging on building file list

2008-09-03 Thread Michael Cockrell
Hi all,

 

I have installed backuppc recently on about 15 servers. 12 are linux
(different flavors) and 3 windows. The windows are using rsyncd whereas the
linux ones are using rsync. Everything is working great on all but two
servers. For some reason on two of the servers it never initiates a file
transfer. It seems like it just hangs while building the file list and it
will stay like that forever. I have yet to get a complete backup of these
servers. I don't know if this has anything to do with it but these are both
web servers hosting php sites. If anyone has any ideas please let me know.

 

Michael



Re: [BackupPC-users] hanging on building file list

2008-09-03 Thread Adam Goryachev

Michael Cockrell wrote:
> Hi all,
> 
> I have installed backuppc recently on about 15 servers. 12 are linux
> (different flavors) and 3 windows. The windows are using rsyncd whereas
> the linux ones are using rsync. Everything is working great on all but
> two servers. For some reason on two of the servers it never initiates a
> file transfer. It seems like it just hangs while building the file list
> and it will stay like that forever. I have yet to get a complete backup
> of these servers. I don’t know if this has anything to do with it but
> these are both web servers hosting php sites. If anyone has any ideas
> please let me know.


Nothing specific, but have you tried the following:
1) Running an fsck on the machines having problems
2) Checking the log files on the problem machines
3) Running something like tar -cvf - / > /dev/null
4) Running memory tests on the machines
5) Using strace to see what the rsync processes on the problem machines
are doing
6) If the disks are IDE or SATA, you may be able to use SMART (e.g.
smartctl) to check the drives

Those are the main ones I can think of right now. Let us know how they
go, and someone else might have some extra suggestions as well.

BTW, your description, while probably quite accurate, lacks a lot of
detail. Perhaps you don't know how to find additional detail, so here are
a couple of clues (if you don't need them, someone else who sees this
might)...

1) Use tcpdump to see if the network is doing things, use the -s and -A
flags depending on what protocol you are using to see more info.

2) Use wireshark to analyse the flow of data throughout the
conversation, it can sometimes pinpoint strange IP flags or hiccups

3) Use strace, especially the -ff and -o flags are useful

4) Always read the man page, you can usually get some sort of debug info
from programs

5) Read the man pages for the above commands to learn about some of
these basic building blocks of diagnostics... I've found them to be
exceptionally helpful over the years, but as always, there is more than
one way to do things, and these days, more than one tool for the same
job.
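
For example, to watch what a stuck rsync is doing (the PID is a
placeholder, of course):

  strace -ff -o /tmp/rsync-trace -p 12345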

Hope it helps anyway...

Regards,
Adam



[BackupPC-users] backup hangs, CPU usage high

2008-09-03 Thread David Koski
Lately, one machine has started to hang on backup.  It was a Linux machine, 
now it is a Windows 2003 Server machine that hangs.  The state of the machine 
stays at "backup in progress" but there is no network traffic and the CPU 
usage is at 100 percent, with two BackupPC_dump processes, one using slightly 
more CPU than the other but with a total of 100 percent.  I have changed the 
RsyncArgs and RsyncRestoreArgs options for "--devices" to "-D".
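
i.e. the relevant part of my config now looks something like this (the
surrounding options are from a stock config.pl, from memory, and may differ
on 2.1.2):

  $Conf{RsyncArgs} = [
      '--numeric-ids', '--perms', '--owner', '--group', '-D',
      '--links', '--hard-links', '--times', '--block-size=2048',
      '--recursive',
  ];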

Backuppc Version: 2.1.2-6
Backuppc host OS: Debian 4.0 (Etch)
kernel: 2.6.12.6-xen
All packages up to date.

Today I increased XferLogLevel to 6 and ran the dump manually and got the 
following error message in a repeating loop:

  create 0 /   0
attribSet(dir=fswsoft, file=)
attribSet(dir=fswsoft, file=)
makeSpecial(/var/lib/backuppc/pc/fp2003/new//fswsoft/, 9, )
Can't open /var/lib/backuppc/pc/fp2003/new//fswsoft/ for empty output\n

Oddly, the directory /var/lib/backuppc/pc/fp2003/new did not exist but I was 
able to create it.  I do have free inodes and I forced fsck.ext3 on the 
filesystems and they were clean.

Any suggestions appreciated.

Regards,
David Koski
[EMAIL PROTECTED]



[BackupPC-users] BackupPC_tarCreate usage

2008-09-03 Thread Gabriel Landais
Hi,
 My main hard drive failed; I now have a hard drive with all the backuppc
backups but no configs... I rebuilt my home server with 2x500GB in RAID-1,
which I hope will help in the future. As I'm trying to restore my data, I
don't know what the best practice would be. So I've plugged the backup drive
into my desktop PC and installed backuppc. Then I created a symbolic
link from /var/lib/backuppc/pc to the pc folder where I can find my
backups. I've then tried to restore with BackupPC_tarCreate,
without success...

Here is one of my tries:

[EMAIL PROTECTED]:/home/glandais$
/usr/share/backuppc/bin/BackupPC_tarCreate -t -n -1 -h localhost -s '/etc' /* > /mnt/server/restore/restore.etc.tar
/usr/share/backuppc/bin/BackupPC_tarCreate: bad share or directory '/etc//bin'
/usr/share/backuppc/bin/BackupPC_tarCreate: bad share or directory '/etc//boot'
/usr/share/backuppc/bin/BackupPC_tarCreate: bad share or directory '/etc//cdrom'
/usr/share/backuppc/bin/BackupPC_tarCreate: bad share or directory '/etc//dev'
/usr/share/backuppc/bin/BackupPC_tarCreate: bad share or directory '/etc//etc'
/usr/share/backuppc/bin/BackupPC_tarCreate: bad share or directory '/etc//home'
/usr/share/backuppc/bin/BackupPC_tarCreate: bad share or directory '/etc//initrd'
/usr/share/backuppc/bin/BackupPC_tarCreate: bad share or directory '/etc//initrd.img'
/usr/share/backuppc/bin/BackupPC_tarCreate: bad share or directory '/etc//initrd.img.old'
/usr/share/backuppc/bin/BackupPC_tarCreate: bad share or directory '/etc//lib'
/usr/share/backuppc/bin/BackupPC_tarCreate: bad share or directory '/etc//lost+found'
/usr/share/backuppc/bin/BackupPC_tarCreate: bad share or directory '/etc//media'
/usr/share/backuppc/bin/BackupPC_tarCreate: bad share or directory '/etc//mnt'
/usr/share/backuppc/bin/BackupPC_tarCreate: bad share or directory '/etc//proc'
/usr/share/backuppc/bin/BackupPC_tarCreate: bad share or directory '/etc//root'
/usr/share/backuppc/bin/BackupPC_tarCreate: bad share or directory '/etc//sbin'
/usr/share/backuppc/bin/BackupPC_tarCreate: bad share or directory '/etc//srv'
/usr/share/backuppc/bin/BackupPC_tarCreate: bad share or directory '/etc//sys'
/usr/share/backuppc/bin/BackupPC_tarCreate: bad share or directory '/etc//tmp'
/usr/share/backuppc/bin/BackupPC_tarCreate: bad share or directory '/etc//usr'
/usr/share/backuppc/bin/BackupPC_tarCreate: bad share or directory '/etc//var'
/usr/share/backuppc/bin/BackupPC_tarCreate: bad share or directory '/etc//vmlinuz'
/usr/share/backuppc/bin/BackupPC_tarCreate: bad share or directory '/etc//vmlinuz.old'
Done: 1 files, 857 bytes, 4 dirs, 0 specials, 23 errors
[EMAIL PROTECTED]:/home/glandais$ tar -tvf /mnt/server/restore/restore.etc.tar
drwxr-xr-x root/root 0 2007-10-24 23:13 ./opt/
drwxr-xr-x root/root 0 2007-10-24 23:13 ./opt/munin/
-rw-r--r-- root/root   857 2007-10-24 23:13 ./opt/munin/munin-node.conf
drwxr-xr-x root/root 0 2007-10-24 23:13 ./opt/munin/plugin-conf.d/
drwxr-xr-x root/root 0 2007-10-24 23:13 ./opt/munin/plugins/

I just want to extract the backups into a file structure; is there no way?
Cheers
Gabriel



Re: [BackupPC-users] BackupPC_tarCreate usage

2008-09-03 Thread Tino Schwarze
On Wed, Sep 03, 2008 at 08:50:33PM +0200, Gabriel Landais wrote:

> [EMAIL PROTECTED]:/home/glandais$
> /usr/share/backuppc/bin/BackupPC_tarCreate -t -n -1 -h localhost -s
> '/etc' /* > /mnt/server/restore/restore.etc.tar
> /usr/share/backuppc/bin/BackupPC_tarCreate: bad share or directory '/etc//bin'

The share name might be just '/'. Look at the top-level directory of a
backup: if there is an 'f%2f' entry in there, the share name is '/'.
Otherwise, that entry's (unmangled) name is the share name. Maybe it is
just 'etc'?

> I just want to extract backups into a file structure, no way?

BackupPC_tarCreate is the right way.
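
For example, if the share turns out to be '/', your original command would
become (same host and backup number):

  /usr/share/backuppc/bin/BackupPC_tarCreate -t -n -1 -h localhost -s '/' /etc > /mnt/server/restore/restore.etc.tar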

HTH,

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.craniosacralzentrum.de
www.forteego.de



Re: [BackupPC-users] Backup through slow line?

2008-09-03 Thread dan
full or incremental really describes how they are stored on backuppc.  a
full will hang around longer than an incremental.  As far as the actual
transfer, the only real difference is that a 'full' doesn't skip files that
have the same mtime, where an incremental skips those files.  This causes a
full to hit the CPU on both sides more, but it doesn't use very much more
bandwidth, as the only added data transfer is the checksums for those files.

On Wed, Sep 3, 2008 at 5:27 AM, Rob Owens <[EMAIL PROTECTED]>wrote:

> Christian Völker wrote:
> > |> Assume, the full backup is finished after two weeks- will the next
> full
> > |> backup take the same amount of time?
> > | If you are using rsync it will be much faster next time, sending only
> > | the changes.
> > If so, again the question what is then the difference between a full
> > rsync backup and an incremental one? I think I'll have to read the docs
> > again to understand this...tried already several times, didn't succesed
> ;-)
> >
> With rsync, the difference between a full and incremental has more to do
> with processor and disk activity than with bandwidth.  Both use about
> the same bandwidth.  If you ask me, it doesn't make sense to call them
> "full" and "incremental", but I guess those terms are holdovers from the
> days when BackupPC didn't offer rsync as a transport.
>
> -Rob


[BackupPC-users] idea

2008-09-03 Thread dan
I have a thought here; thought I'd run it through the users list before
dropping it on the devs.

My idea was to add a small step to the backuppc process to validate that all
files that should be transferred were transferred, and automatically flag the
backup as a partial if there is a discrepancy.

The idea is to simply compare a list of files with some details such as
mtime, size and filename: basically, have a post-dump command run a find
against each share to be backed up, on the remote side and on the backuppc
side, then compare the files.

The reason for this is that I have found that backuppc can miss a file every
now and then without an error or warning.
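
A rough sketch of what I mean, for an rsync share (host name and share path
are placeholders; this version only compares file names, not mtime/size):

  # file names present on the client share
  ssh root@client "cd /backups && find . -type f | sort" > client.lst
  # file names present in the most recent backup, via its tar image
  BackupPC_tarCreate -h client -n -1 -s /backups . | tar -tf - | sort > server.lst
  # lines only in client.lst are files the backup missed
  comm -23 client.lst server.lst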


Re: [BackupPC-users] hanging on building file list

2008-09-03 Thread John Rouillard
On Thu, Sep 04, 2008 at 12:33:15AM +1000, Adam Goryachev wrote:
> Michael Cockrell wrote:
> > I have installed backuppc recently on about 15 servers. 12 are linux
> > (different flavors) and 3 windows. The windows are using rsyncd whereas
> > the linux ones are using rsync. Everything is working great on all but
> > two servers. For some reason on two of the servers it never initiates a
> > file transfer. It seems like it just hangs while building the file list
> > and it will stay like that forever. I have yet to get a complete backup
> > of these servers. I don't know if this has anything to do with it but
> > these are both web servers hosting php sites. If anyone has any ideas
> > please let me know.
> [...]
> BTW, your description, while probably quite accurate, lacks a lot of
> detail. Perhaps you don't know how to find additional detail, so a
> couple of clues (if you don't need them someone else might who might see
> this)...
> 
> 1) Use tcpdump to see if the network is doing things, use the -s and -A
> flags depending on what protocol you are using to see more info.
> 
> 2) Use wireshark to analyse the flow of data throughout the
> conversation, it can sometimes pinpoint strange IP flags or hiccups
> 
> 3) Use strace, especially the -ff and -o flags are useful

Use lsof -p <pid> on the rsync process (on the Linux servers, I assume)
to see what files are open/being processed. If you are on Windows, look
into Process Explorer from Sysinternals (now Microsoft).

-- 
-- rouilj

John Rouillard
System Administrator
Renesys Corporation
603-244-9084 (cell)
603-643-9300 x 111



Re: [BackupPC-users] idea

2008-09-03 Thread Carl Wilhelm Soderstrom
On 09/03 02:29 , dan wrote:
> The idea is to simply compare a list of files with some details such as
> mtime, size and filename. basically, have a post dump command to run a find
> against each share to be backed up on the remote side and on the backuppc
> side then compare the files.

This is a decent idea, though I think it would be best to make it optional.
Some people will not want the overhead in cases where runtime is a precious
commodity.

It might be good to make it a 'warning' which appears on the host status
page, rather than calling it a 'partial'. Thoughts?

-- 
Carl Soderstrom
Systems Administrator
Real-Time Enterprises
www.real-time.com



Re: [BackupPC-users] idea

2008-09-03 Thread Tino Schwarze
On Wed, Sep 03, 2008 at 02:29:53PM -0600, dan wrote:
> I have a thought here; thought I'd run it through the users list before
> dropping it on the devs.
> 
> My idea was to add a small step to the backuppc process to validate that all
> files that should be transferred were transferred, and automatically flag the
> backup as a partial if there is a discrepancy.
> 
> The idea is to simply compare a list of files with some details such as
> mtime, size and filename: basically, have a post-dump command run a find
> against each share to be backed up, on the remote side and on the backuppc
> side, then compare the files.
> 
> The reason for this is that I have found that backuppc can miss a file every
> now and then without an error or warning.

There should be a warning in the XferLog about files which disappeared
during the transfer.

Also, the extra step would require a lot of computation (building the
file list is pretty expensive if you're backing up several hundreds of
thousands of files). After all, the extra file list should not be
necessary, because BackupPC shouldn't miss a file in the first place.
:-| And, BTW, building a file list is only possible for rsync transfers.

Where did it miss any files? What transport method are you using? When
did files get missed?

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.craniosacralzentrum.de
www.forteego.de



Re: [BackupPC-users] idea

2008-09-03 Thread Les Mikesell
dan wrote:
> I have a thought here; thought I'd run it through the users list before 
> dropping it on the devs.
> 
> My idea was to add a small step to the backuppc process to validate that 
> all files that should be transferred were transferred, and automatically 
> flag the backup as a partial if there is a discrepancy. 
> 
> The idea is to simply compare a list of files with some details such as 
> mtime, size and filename: basically, have a post-dump command run a 
> find against each share to be backed up, on the remote side and on the 
> backuppc side, then compare the files.
> 
> The reason for this is that I have found that backuppc can miss a file 
> every now and then without an error or warning.

The only time you should see this is on non-rsync incrementals where the 
timestamps on the files in question are backdated to before the previous 
full.  Do you have missed files without a log entry in other situations?

-- 
   Les Mikesell
[EMAIL PROTECTED]



Re: [BackupPC-users] BackupPC_tarCreate usage

2008-09-03 Thread Les Mikesell
Gabriel Landais wrote:
> Hi,
>  My main hard drive failed, I have now a hard drive with all backuppc
> backups, no confs... I restored my home server with 2x500GB in RAID-1,
> hope it will help in future. As I'm trying to restore my data, I don't
> know what would be the best practice. So I've plugged the backup drive
> in my desktop PC and installed backuppc. Then I've created a symbolic
> link from my /var/lib/backuppc/pc on the pc folder where I can find my
> backups. I've then tried to restore thanks to BackupPC_tarCreate,
> without success...
> 
> Here a tries :
> 
> [EMAIL PROTECTED]:/home/glandais$
> /usr/share/backuppc/bin/BackupPC_tarCreate -t -n -1 -h localhost -s
> '/etc' /* > /mnt/server/restore/restore.etc.tar

The shell is going to expand that /* to whatever it sees in the current 
running system, which probably isn't what you intended.  Was the backup 
only done of /etc or are you trying to only extract /etc?

> I just want to extract backups into a file structure, no way?

Yes, you just need the right command line.  I'd guess that you want a 
'.' instead of that /*.

-- 
   Les Mikesell
[EMAIL PROTECTED]




Re: [BackupPC-users] Backup through slow line?

2008-09-03 Thread Holger Parplies
Hi,

dan wrote on 2008-09-03 14:17:00 -0600 [Re: [BackupPC-users] Backup through 
slow line?]:
> On Wed, Sep 3, 2008 at 5:27 AM, Rob Owens <[EMAIL PROTECTED]>wrote:
> 
> > Christian Völker wrote:
> > > [...] again the question what is then the difference between a full
> > > rsync backup and an incremental one? [...]
> > >
> > With rsync, the difference between a full and incremental has more to do
> > with processor and disk activity than with bandwidth.  Both use about
> > the same bandwidth.  If you ask me, it doesn't make sense to call them
> > "full" and "incremental", but I guess those terms are holdovers from the
> > days when BackupPC didn't offer rsync as a transport.
>
> full or incremental really describes how they are stored on backuppc.  a
> full will hang around longer than an incremental.  As far as the actual
> transfer, the only real difference is that a 'full' doesn't skip files that
> have the same mtime, where an incremental skips those files.  This causes a
> full to hit the CPU on both sides more, but it doesn't use very much more
> bandwidth, as the only added data transfer is the checksums for those files.

yes, there are many different ways to look at this. I'll add one.

I keep repeating this, so you might all be bored, but still:

1.) Full backups make an exact image of your data set, as defined by your
configuration (meaning the image does not include things you are
deliberately excluding).

2.) Incremental backups are a trade-off, cutting down resource usage at the
price of exactness.

Both of these points are true for rsync just as much as tar or smb. rsync is
smarter than tar/smb, so

a) rsync full backups are only minimally more expensive than incrementals in
   terms of bandwidth. Still, every file needs to be completely read from disk
   on both sides, so there is a good reason to offer an "incremental" mode as
   a speedup.

b) rsync incremental backups are *far less* likely to miss part of a precise
   image of your data set. For practical purposes, the chances are probably
   negligible.

When you factor pooling into the equation, this means that the difference
between rsync full and incremental backups is smaller than for tar/smb
backups, which is yet far smaller than for tape backups for instance. Or, the
other way around: for tape backups the difference is obvious, for tar/smb you
can still see it clearly, while for rsync backups you have to look through a
microscope. But it's still the same difference, and it's still there.

I suppose it is for this reason that BackupPC applies the same rules for
backup dependency and requirement of regular full backups to rsync as to the
other transfer methods.

Regards,
Holger



Re: [BackupPC-users] BackupPC_tarCreate usage

2008-09-03 Thread Vincent Fleuranceau
>> [EMAIL PROTECTED]:/home/glandais$
>> /usr/share/backuppc/bin/BackupPC_tarCreate -t -n -1 -h localhost -s
>> '/etc' /* > /mnt/server/restore/restore.etc.tar
>
> The shell is going to expand that /* to whatever it sees in the current
> running system, which probably isn't what you intended.  Was the backup
> only done of /etc or are you trying to only extract /etc?
>
>> I just want to extract backups into a file structure, no way?
>
> Yes, you just need the right command line.  I'd guess that you want a
> '.' instead of that /*.

I use this in a shell script (where of course $HOST and $SHARE are defined):

cd /mnt/external_hd
sudo -u backuppc /usr/share/backuppc/bin/BackupPC_tarCreate -n -1 -h $HOST -s $SHARE / | tar xf -

where $SHARE is the name of the share exactly as it appears in the
host's configuration file, in the $Conf{TarShareName},
$Conf{RsyncShareName} or $Conf{SmbShareName} variable.

In fact, think of $SHARE not as an actual directory but as a "root" or
"mount point" instead.

And in my example, the trailing / is a directory and means "top  
directory relative to $SHARE".

So, if you have:
$Conf{TarShareName} = ['/'];

To extract /etc, you need to execute:
BackupPC_tarCreate -n -1 -h $HOST -s '/' /etc > /mnt/server/restore/restore.etc.tar

And if you have:
$Conf{TarShareName} = ['/etc'];

To extract /etc, you need to execute:
BackupPC_tarCreate -n -1 -h $HOST -s '/etc' / > /mnt/server/restore/restore.etc.tar

I don't use tar myself, so I may be wrong, but this is how I see it.

-- Vincent






Re: [BackupPC-users] BackupPC_tarCreate usage

2008-09-03 Thread Gabriel Landais
Thanks all for your answers, I'll try ASAP!
Cheers
Gabriel



Re: [BackupPC-users] Backup through slow line?

2008-09-03 Thread Adam Goryachev

Holger Parplies wrote:
> a) rsync full backups are only minimally more expensive than incrementals in
>    terms of bandwidth. Still, every file needs to be completely read from disk
>    on both sides, so there is a good reason to offer an "incremental" mode as
>    a speedup.

BTW, 2 x rsync incrementals of the same level will transfer more data
than one full + one incremental. So for example, doing 6 incrementals
followed by a full backup can in fact transfer a lot more data than
doing 7 full backups.

e.g., if a file changes after the full backup, then each incremental
backup will re-transfer those changes, while a full will transfer the
changes once and the following full/incremental will not re-transfer
them.

Regards,
Adam



[BackupPC-users] Ping too slow

2008-09-03 Thread Andrew
A few days ago I noticed that none of my hosts are backing up. All but
two give the error, "no ping (ping too slow: 38.94msec (threshold is
35msec))" -- or some similar ping.

One such host is named "shipping" in backuppc. The thing is, I can ping
from the BackupPC server with no problem:
$ nmblookup shipping
192.168.111.126 shipping<00>
$ ping 192.168.111.126
PING 192.168.111.126 (192.168.111.126) 56(84) bytes of data.
...
4 packets transmitted, 4 received, 0% packet loss, time 3002ms
rtt min/avg/max/mdev = 0.130/0.138/0.151/0.016 ms

so, as you can see, the ping is measured in tenths of milliseconds, not
even close to the reported 38+ms. Given that, I can't help but think
that backuppc is doing something very odd, but I have no idea what that
might be. Could someone give me a hand?

-Andrew
[EMAIL PROTECTED]




Re: [BackupPC-users] Backup through slow line?

2008-09-03 Thread Holger Parplies
Hi,

Adam Goryachev wrote on 2008-09-04 08:55:51 +1000 [Re: [BackupPC-users] Backup 
through slow line?]:
> [...]
> BTW, 2 x rsync incrementals of the same level will transfer more data
> than one full + one incremental. So for example, doing 6 incrementals
> followed by a full backup can in fact transfer a lot more data than
> doing 7 full backups.

a while ago, rsync incrementals used to be based on the last backup of the
next lower level (i.e. full backup for level 1 incremental), while rsync full
backups were based on the last backup of any level (i.e. the previous backup).
Craig was considering changing this so that any rsync backup would always be
based on the previous backup. I don't know if this has happened yet (can't
find anything in the 3.1.0 changelog), or even if Craig is still considering
it.

I would argue that for a full backup, it is ok to base it on the previous
backup (there is no risk to the integrity, and it will probably transfer less
data), while for an incremental the possibility of missing changes might be
mitigated by referencing the previous backup of next lower level rather than
the previous backup (but there are no guarantees, so it's only a vague
possibility; then again, what other meaning do multilevel rsync incremental
backups have?) - meaning I probably wouldn't change anything.

Presuming nothing has changed (yet), it would in fact be optimal (regarding
bandwidth) to use alternating full and incremental backups. Nevertheless, it
depends on the individual situation whether "a bit less bandwidth" is worth
"many more disk accesses".

Regards,
Holger



Re: [BackupPC-users] idea

2008-09-03 Thread dan
this was on an rsync incremental.  There was no error in the XferLOG; it
is like the file was not there.  The atime on the file is from before the
backup, so I know the file was there.  I have only seen this one time.

it's true that only rsync actually builds a file list, but that doesn't mean
that a separate file-list pass could not be run, either with a find command
over ssh or with a locally mounted tar backup.

The idea of marking it partial is that backuppc would then kick in on the
next wakeup and try to finish the job, because there was not a completed
backup within the schedule.  Then the next backup would have very little to
transfer and may pick up the missing file.



On Wed, Sep 3, 2008 at 3:16 PM, Les Mikesell <[EMAIL PROTECTED]> wrote:

> dan wrote:
>
>> I have a thought here; thought I'd run it through the users list before
>> dropping it on the devs.
>>
>> My idea was to add a small step to the backuppc process to validate that
>> all files that should be transferred were transferred, and automatically flag
>> the backup as a partial if there is a discrepancy.
>> The idea is to simply compare a list of files with some details such as
>> mtime, size and filename: basically, have a post-dump command run a find
>> against each share to be backed up, on the remote side and on the backuppc
>> side, then compare the files.
>>
>> The reason for this is that I have found that backuppc can miss a file
>> every now and then without an error or warning.
>>
>
> The only time you should see this is on non-rsync incrementals where the
> timestamps on the files in question are backdated to before the previous
> full.  Do you have missed files without a log entry in other situations?
>
> --
>  Les Mikesell
>   [EMAIL PROTECTED]
>


Re: [BackupPC-users] Ping too slow

2008-09-03 Thread dan
You may be looking at the wrong info here. That 3002ms is the total time
the ping operation took; it isn't really comparable to the actual
per-packet ping times.

here:

PING www.yahoo-ht3.akadns.net (209.131.36.158) 56(84) bytes of data.
64 bytes from f1.www.vip.sp1.yahoo.com (209.131.36.158): icmp_seq=1 ttl=51
time=65.8 ms
64 bytes from f1.www.vip.sp1.yahoo.com (209.131.36.158): icmp_seq=2 ttl=51
time=61.0 ms
64 bytes from f1.www.vip.sp1.yahoo.com (209.131.36.158): icmp_seq=3 ttl=51
time=74.0 ms
64 bytes from f1.www.vip.sp1.yahoo.com (209.131.36.158): icmp_seq=4 ttl=51
time=92.5 ms

--- www.yahoo-ht3.akadns.net ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3005ms
rtt min/avg/max/mdev = 61.018/73.364/92.532/12.015 ms

see, these ping times are 65.8ms - 92.5ms, but the total time is 3005ms.

here is a ping on my local LAN

PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.
64 bytes from 192.168.1.1: icmp_seq=1 ttl=64 time=1.10 ms
64 bytes from 192.168.1.1: icmp_seq=2 ttl=64 time=1.03 ms
64 bytes from 192.168.1.1: icmp_seq=3 ttl=64 time=1.04 ms
64 bytes from 192.168.1.1: icmp_seq=4 ttl=64 time=1.97 ms

--- 192.168.1.1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3000ms
rtt min/avg/max/mdev = 1.039/1.290/1.975/0.396 ms

backuppc would not back up the host "yahoo" because of the high ping times,
all greater than 35ms, but it would back up my little router because the
pings are < 2ms.

can you post your actual ping times?


On Wed, Sep 3, 2008 at 5:17 PM, Andrew <[EMAIL PROTECTED]> wrote:

> A few days ago I noticed that none of my hosts are backing up. All but
> two give the error, "no ping (ping too slow: 38.94msec (threshold is
> 35msec))" -- or some similar ping.
>
> One such host is named "shipping" in backuppc. The thing is, I can ping
> from the BackupPC server with no problem:
> $ nmblookup shipping
> 192.168.111.126 shipping<00>
> $ ping 192.168.111.126
> PING 192.168.111.126 (192.168.111.126) 56(84) bytes of data.
> ...
> 4 packets transmitted, 4 received, 0% packet loss, time 3002ms
> rtt min/avg/max/mdev = 0.130/0.138/0.151/0.016 ms
>
> so, as you can see, the ping is measured in tenths of milliseconds, not
> even close to the reported 38+ms. Given that, I can't help but think
> that backuppc is doing something very odd, but I have no idea what that
> might be. Could someone give me a hand?
>
> -Andrew
> [EMAIL PROTECTED]
>
>


Re: [BackupPC-users] Ping too slow

2008-09-03 Thread Holger Parplies
Hi Dan,

you might consider reading what you're replying to, which very much happens
implicitly if you don't top-post.

dan wrote on 2008-09-03 19:06:07 -0600 [Re: [BackupPC-users] Ping too slow]:
> On Wed, Sep 3, 2008 at 5:17 PM, Andrew <[EMAIL PROTECTED]> wrote:
> 
> > A few days ago I noticed that none of my hosts are backing up. All but
> > two give the error, "no ping (ping too slow: 38.94msec (threshold is
> > 35msec))" -- or some similar ping.
> >
> > One such host is named "shipping" in backuppc. The thing is, I can ping
> > from the BackupPC server with no problem:
> > [...]
> > $ ping 192.168.111.126
> > PING 192.168.111.126 (192.168.111.126) 56(84) bytes of data.
> > ...
> > 4 packets transmitted, 4 received, 0% packet loss, time 3002ms
> > rtt min/avg/max/mdev = 0.130/0.138/0.151/0.016 ms

This is the interesting line which says it all: ping times between 0.130ms and
0.151ms which is, indeed, orders of magnitude less than 38.94ms. In
particular, BackupPC::Lib::CheckHostAlive picks out the average value, which
would be 0.138ms in this case.

Hi Andrew,

> > Given that, I can't help but think that backuppc is doing something very
> > odd, but I have no idea what that might be.

it almost seems so :-|.

What are $Conf{PingCmd} and $Conf{PingMaxMsec} set to? Hmm, $Conf{PingMaxMsec}
is 35, as the error shows. You could increase $Conf{PingMaxMsec}, but that's
not a real fix.
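
For reference, a stock config.pl ships with something like this (from
memory, so treat it as an example rather than an authoritative default):

  $Conf{PingCmd}     = '$pingPath -c 1 $host';
  $Conf{PingMaxMsec} = 20;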

What ping command is actually executed? It appears you need to run
BackupPC_dump with the -v flag to find that out ... (something like

sudo -u backuppc /usr/local/BackupPC/bin/BackupPC_dump -vf shipping

).

One other possibility would be that your ping times are not in fact always
that low (due to a network problem maybe). How many backups are you running in
parallel? Can you try pinging in a situation identical to when BackupPC tries
to? You could add a DumpPreUserCmd with a ping - that would be logged - just
don't forget the -c flag ;-) and possibly -w and -i.

> here:
> 
> PING www.yahoo-ht3.akadns.net (209.131.36.158) 56(84) bytes of data.
> 64 bytes from f1.www.vip.sp1.yahoo.com (209.131.36.158): icmp_seq=1 ttl=51
> time=65.8 ms
> 64 bytes from f1.www.vip.sp1.yahoo.com (209.131.36.158): icmp_seq=2 ttl=51
> time=61.0 ms
> 64 bytes from f1.www.vip.sp1.yahoo.com (209.131.36.158): icmp_seq=3 ttl=51
> time=74.0 ms
> 64 bytes from f1.www.vip.sp1.yahoo.com (209.131.36.158): icmp_seq=4 ttl=51
> time=92.5 ms
> 
> --- www.yahoo-ht3.akadns.net ping statistics ---
> 4 packets transmitted, 4 received, 0% packet loss, time 3005ms
> rtt min/avg/max/mdev = 61.018/73.364/92.532/12.015 ms
> 
> see this ping times are 65.8ms - 92.5ms but the total times is 3005ms.

Dan, it seems you're looking at the wrong info. Ping prints a summary line
(right above your analysis), and ping doesn't get the minimum time wrong: it's
61.018ms (second ping), not 65.8ms (first ping) (just a detail, but it
demonstrates why you should look at the summary line).

> can you post your actual ping times?

No need for that.

Regards,
Holger



Re: [BackupPC-users] Ping too slow

2008-09-03 Thread Craig Barratt
Andrew writes:

> A few days ago I noticed that none of my hosts are backing up. All but
> two give the error, "no ping (ping too slow: 38.94msec (threshold is
> 35msec))" -- or some similar ping.
> 
> One such host is named "shipping" in backuppc. The thing is, I can ping
> from the BackupPC server with no problem:
> $ nmblookup shipping
> 192.168.111.126 shipping<00>
> $ ping 192.168.111.126
> PING 192.168.111.126 (192.168.111.126) 56(84) bytes of data.
> ...
> 4 packets transmitted, 4 received, 0% packet loss, time 3002ms
> rtt min/avg/max/mdev = 0.130/0.138/0.151/0.016 ms

BackupPC should parse this last line and extract 0.138msec as
the round trip time.

To see exactly the command it is running and the output it gets,
run:

su backuppc
BackupPC_dump -f -v shipping

Hit ^C after you get past the ping output and parsed result.
What ping output do you get?

In the meantime, increase $Conf{PingMaxMsec} to get backups
running again.

Craig



Re: [BackupPC-users] Backup through slow line?

2008-09-03 Thread Tino Schwarze
On Thu, Sep 04, 2008 at 12:23:56AM +0200, Holger Parplies wrote:

> I keep repeating this, so you might all be bored, but still:

[...]

> When you factor pooling into the equation, this means that the difference
> between rsync full and incremental backups is smaller than for tar/smb
> backups, which is yet far smaller than for tape backups for instance. Or, the
> other way around: for tape backups the difference is obvious, for tar/smb you
> can still see it clearly, while for rsync backups you have to look through a
> microscope. But it's still the same difference, and it's still there.

I like that explanation! :-) IMHO it should go to the Wiki.

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.craniosacralzentrum.de
www.forteego.de
