Re: [BackupPC-users] Status and stop backup from command line

2010-12-09 Thread Keith Edmunds
Robin, thank you: exactly what I needed.



Re: [BackupPC-users] Filesystem corruption: consistency of the pool

2010-12-09 Thread martin f krafft
also sprach Jeffrey J. Kosowsky backu...@kosowsky.org [2010.11.17.0059 +0100]:
 I wrote two programs that might be helpful here:
 1. BackupPC_digestVerify.pl
If you use rsync with checksum caching then this program checks the
(uncompressed) contents of each pool file against the stored md4
checksum. This should catch any bit errors in the pool. (Note
though that I seem to recall that the checksum only gets stored the
second time a file in the pool is backed up so some pool files may
not have a checksum included - I may be wrong since it's been a
while...)

I did a test run of this tool and it took 12 days to run across the
pool. I cannot take the backup machine offline for so long. Is it
possible to run this while BackupPC runs in the background?

 2. BackupPC_fixLinks.pl
This program scans through both the pool and pc trees to look for
wrong, duplicate, or missing links. It can fix most errors.

And this?

How else do you suggest I run it?

Thanks,

-- 
martin | http://madduck.net/ | http://two.sentenc.es/
 
remember, half the people are below average.
 
spamtraps: madduck.bo...@madduck.net




Re: [BackupPC-users] FSArchiver?

2010-12-09 Thread Jonathan Schaeffer
On 09/12/2010 06:55, hans...@gmail.com wrote:
 I've been investigating how to backup BackupPC's filesystem,
 specifically the tree with all the hard links (BTW what's the right
 name for it, the one that's not the pool?)
 
 The goal is to be able to bring a verified-good copy of the whole
 volume off-site via a big kahuna SATA drive.

I'm not answering your question exactly, but you might be interested
in this:

If you consider using ZFS as BackupPC's filesystem, there is an awesome
combo:

zfs snapshot   # makes a snapshot of your filesystem, for instance on a
daily basis

zfs send snapshot | ssh backback zfs receive

and your filesystem will be exported on host backback AND you will be
able to travel in time by mounting the daily snapshots.
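
A slightly fuller sketch of that combo (the dataset name tank/backuppc,
the target host backback and the snapshot names are purely illustrative):

   zfs snapshot tank/backuppc@2010-12-09
   zfs send tank/backuppc@2010-12-09 | ssh backback zfs receive tank/backuppc
   # later days can be sent incrementally, which is much cheaper:
   zfs send -i tank/backuppc@2010-12-08 tank/backuppc@2010-12-09 \
       | ssh backback zfs receive tank/backuppc
   # on backback, each received snapshot is then browsable under
   # /tank/backuppc/.zfs/snapshot/2010-12-09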

Jonathan

 
 I don't have enough RAM (or time!) for rsync -H and cp -a
 
 I was originally looking at block-level partition imaging tools, from
 mdmraid (RAID1'ing to a removable drive) to dd to Acronis.
 
 I'm also looking at BackupPC_tarPCCopy, which seems great, but
 
 What I'm really looking for is to be able to just mount the resulting
 filesystem on any ol' livecd, without having to restore anything,
 reconstruct LVM/RAID etc complexities just to get at the data - the
 source volume is an LV running on a RAID6 array, but I want the target
 partition to be a normal one.
 
 I've come across this tool: http://www.fsarchiver.org/Main_Page
 
 Does anyone have experience with it?
 
 Any and all feedback/suggestions welcome.
 


-- 
IUEM - Service Informatique
place Nicolas Copernic
29280 Plouzané
France
tel: +33 2 98 49 87 94




Re: [BackupPC-users] Different scheduling for different folders on same server

2010-12-09 Thread Ed McDonagh
All the transfer methods (smb, rsync, tar) allow you to specify which
folders you wish to back up.

The way to achieve what you want is to create two 'hosts' to be backed
up, say myserver1 and myserver2 for the server myserver. For each one
set up for just one of the two folders, with the scheduling options you
want.

Then for both, set the host alias to myserver. Job done.
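
A minimal sketch of the two per-host config files (host and folder
names are illustrative, and this assumes the rsync transfer method):

   # myserver1.pl
   $Conf{ClientNameAlias} = 'myserver';
   $Conf{RsyncShareName}  = ['/folder1'];
   $Conf{FullPeriod}      = 13.97;    # full roughly every two weeks

   # myserver2.pl
   $Conf{ClientNameAlias} = 'myserver';
   $Conf{RsyncShareName}  = ['/folder2'];
   $Conf{FullPeriod}      = 13.97;    # start its first full a week later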

We do this quite a lot to reduce the length of individual backups, or to
put in more stringent blackouts for files that mustn't be touched during
working hours (database files with no shadow copy available), and less
strict for non-critical files.

Be warned, though: the backup server will not think they are the same
machine, so it will happily back up both folders simultaneously. To achieve
something like what you are after in terms of the schedule, you'll need
to craft the full and incremental intervals appropriately, and set them
off manually or with cron jobs.

Hope this helps.

Ed

On Wed, 2010-12-08 at 15:25 +0100, Cyril Lavier wrote:
 Hi.
 
 I have another question on Backuppc.
 
 Here is the situation:
 
 We have 2 big folders (500 and 600 GB) full of small files.
 
 They are on the same server, and apparently, with backuppc, I can only 
 backup a full server.
 
 But with these two folders, a full backup lasts for more than 20 hours, 
 and these folders are expected to grow in the near future.
 
 So I would like to know if there's a way to schedule full backups like 
 this on a time span of 4 weeks
 
 1st saturday : full folder1, incremental folder2
 1st week, monday to friday : incremental folder1 and folder2
 2nd saturday : full folder2, incremental folder1
 2nd week, monday to friday : incremental folder1 and folder2
 3rd saturday : incremental folder1 and folder2
 3rd week, monday to friday : incremental folder1 and folder2
 4th saturday : incremental folder1 and folder2
 4th week, monday to friday : incremental folder1 and folder2
 
 The important part is the first two weeks.
 
 If anybody has some ideas about how to do something like this with 
 backuppc, this could help me a lot.
 
 Thanks.
 




[BackupPC-users] Minimal backuppc install (without apache or web server)???

2010-12-09 Thread Jeffrey J. Kosowsky
I'm trying to run backuppc on a debian-based plugcomputer.
I would rather not install apache (or anything else extra for that
matter) -- since I only use the CLI anyway.

- Do I have to do anything special to get backuppc to work without the
  web interface (and with no apache)? Or is apache tightly integrated
  and unavoidable...

- apt-get wants to install about 55 new Debian packages; I want to divide
  them into unnecessary (if no apache/GUI), required (for a base rsync-
  method install), and optional (if using other methods). See also the
  apt-get sketch after the list below.

  Is my thinking right here:
  NECESSARY:
 backuppc
 libcompress-raw-zlib-perl
 libcompress-zlib-perl
 libfile-rsyncp-perl
 libio-compress-base-perl
 libio-compress-zlib-perl
 perl-suid

  OPTIONAL:
libarchive-zip-perl (only if using BackupPC_zipCreate)
psmisc (not sure if needed???)
samba-common (for smb transport)
smbclient (for smb transport)

  UNNECESSARY
apache2
apache2-mpm-worker 
apache2-utils 
apache2.2-common
defoma
fontconfig 
fontconfig-config
libapr1
libaprutil1
libcairo2 
libdatrie0
libdirectfb-1.0-0  libfontconfig1 libfontenc1
libfreetype6 
libmysqlclient15off
libpango1.0-0
libpango1.0-common
libpixman-1-0
libpng12-0
libpq5 
librrd4 
libsysfs2 
libtalloc1
libthai-data
libthai0
libts-0.0-0
libwbclient0
libxcb-render-util0
libxcb-render0
libxfont1
libxft2 
libxrender1
mysql-common
openssl (I'm assuming this is only needed for apache)
openssl-blacklist
rrdtool
ssl-cert
ttf-dejavu
ttf-dejavu-core
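
One thing worth trying before sorting that list by hand (just a guess -
it depends on whether apache2 and the rrdtool/font stack come in via
Depends or only via Recommends, which 'apt-cache show backuppc' will
tell you):

   apt-get install --no-install-recommends backuppc libfile-rsyncp-perl

If apache2 is a hard dependency of the Debian package it will still be
pulled in, but you can simply leave the web server disabled.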



Re: [BackupPC-users] Filesystem corruption: consistency of the pool

2010-12-09 Thread Jeffrey J. Kosowsky
martin f krafft wrote at about 09:53:25 +0100 on Thursday, December 9, 2010:
  also sprach Jeffrey J. Kosowsky backu...@kosowsky.org [2010.11.17.0059 
  +0100]:
   I wrote two programs that might be helpful here:
   1. BackupPC_digestVerify.pl
  If you use rsync with checksum caching then this program checks the
  (uncompressed) contents of each pool file against the stored md4
  checksum. This should catch any bit errors in the pool. (Note
  though that I seem to recall that the checksum only gets stored the
  second time a file in the pool is backed up so some pool files may
  not have a checksum included - I may be wrong since it's been a
  while...)
  
  I did a test run of this tool and it took 12 days to run across the
  pool. I cannot take the backup machine offline for so long. Is it
  possible to run this while BackupPC runs in the background?

It can run while backuppc is running, though it will obviously miss
some new files added by backuppc after you started running the
program. My routine is non-destructive (it doesn't 'fix' anything), so
it shouldn't conflict.

  
   2. BackupPC_fixLinks.pl
  This program scans through both the pool and pc trees to look for
  wrong, duplicate, or missing links. It can fix most errors.
  
  And this?
I don't think I understand the question...
(note I posted a slightly updated version on the group last night)
  
  How else do you suggest I run it?
Look at the usage info ;)
Or if you trust it to detect and fix it all in one step:
   BackupPC_fixLinks.pl -f [ optional output file to capture all the
   detections and statuses ]

Or to do it sequentially:
   Detect:
   BackupPC_fixLinks.pl   [output file]
   Fix:
   BackupPC_fixLinks.pl  -l [output file]



Re: [BackupPC-users] SPNEGO login failed: invalid parameter

2010-12-09 Thread Frank J. Gómez
Just bumping this.  I need to get this machine back on its routine.  Please
let me know if there's any additional information I can provide to help you
help me.

Thanks so much!
-Frank

2010/12/7 Frank J. Gómez fr...@crop-circle.net

 A Windows 7 laptop is failing with "backup failed (No files dumped for
 share win7home)".  Prior to this, one full and three incrementals completed
 successfully.

 I verified the username and password, and I've tried with Windows Firewall
 turned completely off.  The password and the machine name contain only
 letters and numbers -- letters only for the username.

 When I run:

 smbclient \\\\13708n1\\win7home -U backuppc -E -d 3


 I get this in the output:

 Doing spnego session setup (blob length=336)

  SPNEGO login failed: Invalid parameter


 I get the same result whether I provide the correct password or not, or
 whether I use the proper share name or a nonexistent one.  When I run the
 same command against different Windows 7 laptops, the blob length is much
 shorter, and I don't get the Invalid parameter failure.

 Here's another interesting bit of information; when I run:

 smbtree -U backuppc


 I get (abridged):

 MY-WORKGROUP
 \\44Z62L1
 \\44Z62L1\win7home
 \\44Z62L1\Users
  \\44Z62L1\IPC$   Remote IPC
 \\44Z62L1\C$ Default share
  \\44Z62L1\ADMIN$ Remote Admin
 \\13708N1


 Note that there are no shares listed for 13708N1.  Furthermore, running:

 smbtree -U bsmith


 (where bsmith is the name of the laptop's primary user) gives a similar
 output.  The user can't see her own shares, even though I can use that
 command with other users to see just the shares on their respective laptops.

 I think the user must have inadvertently changed some network or sharing
 settings on her laptop, because it does not appear that her shares are being
 broadcast.  I've verified that the directory is being shared with the
 backuppc user as well as the Backup Operators group, using the win7home
 share name.

 Any suggestions?

 Thanks,
 -Frank



Re: [BackupPC-users] cpool weirdness - pc directories inside cpool???

2010-12-09 Thread Carl Wilhelm Soderstrom
On 12/09 12:48 , Jeffrey J. Kosowsky wrote:
 Since this error can cause major failures, I'm surprised that an email
 isn't sent out.
 In fact, I think it would be helpful that any error that causes a
 failure or potential pool corruption should be mailed to the user.

Along those same lines, I just had a conversation yesterday with a client
who was concerned about backup errors (due to files changing in the middle
of the backup, or the like).

It would be nice if XferErrors could be mailed as well, so there would be a
better chance of seeing such things.

-- 
Carl Soderstrom
Systems Administrator
Real-Time Enterprises
www.real-time.com



Re: [BackupPC-users] SPNEGO login failed: invalid parameter

2010-12-09 Thread Carl Wilhelm Soderstrom
On 12/09 09:53 , Frank J. Gómez wrote:
  smbclient 13708n1\\win7home -U backuppc -E -d 3
 
  I get this in the output:
 
  Doing spnego session setup (blob length=336)
 
   SPNEGO login failed: Invalid parameter

snip

 
  Here's another interesting bit of information; when I run:
 

snip

Is the share reachable from other Windows machines?
If so, what version?
What smbclient version are you using? I believe 3.4.6 or above is needed for
full interoperability with Win7, unless you apply some registry hacks to
turn off some security features. My knowledge is pretty sketchy, though.
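
(For what it's worth, a quick way to check on the BackupPC server:

   smbclient -V

prints the Samba version string.)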

-- 
Carl Soderstrom
Systems Administrator
Real-Time Enterprises
www.real-time.com



Re: [BackupPC-users] cpool weirdness - pc directories inside cpool???

2010-12-09 Thread Jeffrey J. Kosowsky
Carl Wilhelm Soderstrom wrote at about 09:41:03 -0600 on Thursday, December 9, 
2010:
  On 12/09 12:48 , Jeffrey J. Kosowsky wrote:
   Since this error can cause major failures, I'm surprised that an email
   isn't sent out.
   In fact, I think it would be helpful that any error that causes a
   failure or potential pool corruption should be mailed to the user.
  
  Along those same lines, I just had a conversation yesterday with a client
  who was concerened about backup errors (due to files changing in the middle
  of the backup or the like). 
  
  It would be nice if XferErrors could be mailed as well; so there would be a
  better chance of seeing such things. 
  

In fact, had I not been helping Robin by updating my scripts and then
testing them on my supposedly 'clean' system, I might never have known
that the disk was terribly corrupted until perhaps I did an fsck - but
since I almost never shut the system down, I almost never run fscks...



Re: [BackupPC-users] Filesystem corruption: consistency of the pool

2010-12-09 Thread Robin Lee Powell
On Thu, Dec 09, 2010 at 09:53:25AM +0100, martin f krafft wrote:
 also sprach Jeffrey J. Kosowsky backu...@kosowsky.org [2010.11.17.0059 
 +0100]:
  I wrote two programs that might be helpful here:
  1. BackupPC_digestVerify.pl
 If you use rsync with checksum caching then this program checks the
 (uncompressed) contents of each pool file against the stored md4
 checksum. This should catch any bit errors in the pool. (Note
 though that I seem to recall that the checksum only gets stored the
 second time a file in the pool is backed up so some pool files may
 not have a checksum included - I may be wrong since it's been a
 while...)
 
 I did a test run of this tool and it took 12 days to run across the
 pool. I cannot take the backup machine offline for so long. Is it
 possible to run this while BackupPC runs in the background?
 
  2. BackupPC_fixLinks.pl
 This program scans through both the pool and pc trees to look for
 wrong, duplicate, or missing links. It can fix most errors.
 
 And this?

I don't know about the first one, but BackupPC_fixLinks.pl can
*definitely* be run while BackupPC runs.

For serious corruption, you may want to grab the patch I posted a
few days ago; it makes the run *much* slower, but on the plus side
it will fix more errors.

OTOH, the errors it fixes only waste disk space; they don't actually
break BackupPC's ability to function at all.

-Robin

-- 
http://singinst.org/ :  Our last, best hope for a fantastic future.
Lojban (http://www.lojban.org/): The language in which this parrot
is dead is ti poi spitaki cu morsi, but this sentence is false
is na nei.   My personal page: http://www.digitalkingdom.org/rlp/



Re: [BackupPC-users] Filesystem corruption: consistency of the pool

2010-12-09 Thread Jeffrey J. Kosowsky
Robin Lee Powell wrote at about 11:20:26 -0800 on Thursday, December 9, 2010:
  On Thu, Dec 09, 2010 at 09:53:25AM +0100, martin f krafft wrote:
   also sprach Jeffrey J. Kosowsky backu...@kosowsky.org [2010.11.17.0059 
   +0100]:
I wrote two programs that might be helpful here:
1. BackupPC_digestVerify.pl
   If you use rsync with checksum caching then this program checks the
   (uncompressed) contents of each pool file against the stored md4
   checksum. This should catch any bit errors in the pool. (Note
   though that I seem to recall that the checksum only gets stored the
   second time a file in the pool is backed up so some pool files may
   not have a checksum included - I may be wrong since it's been a
   while...)
   
   I did a test run of this tool and it took 12 days to run across the
   pool. I cannot take the backup machine offline for so long. Is it
   possible to run this while BackupPC runs in the background?
   
2. BackupPC_fixLinks.pl
   This program scans through both the pool and pc trees to look for
   wrong, duplicate, or missing links. It can fix most errors.
   
   And this?
  
  I don't know about the first one, but BackupPC_fixLinks.pl can
  *definitely* be run while BackupPC runs.
  
  For serious corruption, you may want to grab the patch I posted a
  few days ago; it makes the run *much* slower, but on the plus side
  it will fix more errors.

I would suggest instead using the version I posted last night...
It should be much faster (though still slow) and may avoid some issues...

  
  OTOH, the errors it fixes only waste disk space, they don't actually
  break BackupPC's ability to function at all.
  



Re: [BackupPC-users] Filesystem corruption: consistency of the pool

2010-12-09 Thread Robin Lee Powell
On Thu, Dec 09, 2010 at 03:03:37PM -0500, Jeffrey J. Kosowsky wrote:
 Robin Lee Powell wrote at about 11:20:26 -0800 on Thursday, December 9, 2010:
   On Thu, Dec 09, 2010 at 09:53:25AM +0100, martin f krafft wrote:
also sprach Jeffrey J. Kosowsky backu...@kosowsky.org [2010.11.17.0059 
 +0100]:
 I wrote two programs that might be helpful here:
 1. BackupPC_digestVerify.pl
If you use rsync with checksum caching then this program checks the
(uncompressed) contents of each pool file against the stored md4
checksum. This should catch any bit errors in the pool. (Note
though that I seem to recall that the checksum only gets stored the
second time a file in the pool is backed up so some pool files may
not have a checksum included - I may be wrong since it's been a
while...)

I did a test run of this tool and it took 12 days to run across the
pool. I cannot take the backup machine offline for so long. Is it
possible to run this while BackupPC runs in the background?

 2. BackupPC_fixLinks.pl
This program scans through both the pool and pc trees to look for
wrong, duplicate, or missing links. It can fix most errors.

And this?
   
   I don't know about the first one, but BackupPC_fixLinks.pl can
   *definitely* be run while BackupPC runs.
   
   For serious corruption, you may want to grab the patch I posted a
   few days ago; it makes the run *much* slower, but on the plus side
   it will fix more errors.
 
 I would suggest instead using the version I posted last night...
 It should be much faster though still slow and may avoid some
 issues...

Well, I meant that version *plus* my patch. :D

Will your new version catch the this has multiple hard links but
not into the pool error I was seeing?  (If so yay! and thank you!)

-Robin

-- 
http://singinst.org/ :  Our last, best hope for a fantastic future.
Lojban (http://www.lojban.org/): The language in which this parrot
is dead is ti poi spitaki cu morsi, but this sentence is false
is na nei.   My personal page: http://www.digitalkingdom.org/rlp/



[BackupPC-users] Are all non-zero length files in the pc tree stored in the pool?

2010-12-09 Thread Jeffrey J. Kosowsky
At least all files beneath the share level that is...
My backup system is down so I can't check at the host level.

But I also wanted to confirm that this holds true not just for normal
files but for links (soft & hard) and other special files.
(Of course it's not true for directories.)

Thanks



Re: [BackupPC-users] Filesystem corruption: consistency of the pool

2010-12-09 Thread Jeffrey J. Kosowsky
Robin Lee Powell wrote at about 12:06:24 -0800 on Thursday, December 9, 2010:
  On Thu, Dec 09, 2010 at 03:03:37PM -0500, Jeffrey J. Kosowsky wrote:
   Robin Lee Powell wrote at about 11:20:26 -0800 on Thursday, December 9, 
   2010:
 On Thu, Dec 09, 2010 at 09:53:25AM +0100, martin f krafft wrote:
  also sprach Jeffrey J. Kosowsky backu...@kosowsky.org 
   [2010.11.17.0059 +0100]:
   I wrote two programs that might be helpful here:
   1. BackupPC_digestVerify.pl
  If you use rsync with checksum caching then this program checks 
   the
  (uncompressed) contents of each pool file against the stored md4
  checksum. This should catch any bit errors in the pool. (Note
  though that I seem to recall that the checksum only gets stored 
   the
  second time a file in the pool is backed up so some pool files 
   may
  not have a checksum included - I may be wrong since it's been a
  while...)
  
  I did a test run of this tool and it took 12 days to run across the
  pool. I cannot take the backup machine offline for so long. Is it
  possible to run this while BackupPC runs in the background?
  
   2. BackupPC_fixLinks.pl
  This program scans through both the pool and pc trees to look for
  wrong, duplicate, or missing links. It can fix most errors.
  
  And this?
 
 I don't know about the first one, but BackupPC_fixLinks.pl can
 *definitely* be run while BackupPC runs.
 
 For serious corruption, you may want to grab the patch I posted a
 few days ago; it makes the run *much* slower, but on the plus side
 it will fix more errors.
   
   I would suggest instead using the version I posted last night...
   It should be much faster though still slow and may avoid some
   issues...
  
  Well, I meant that version *plus* my patch. :D

My version does what your patch posted a couple of days ago does, only
faster & probably better (i.e. your version may miss some cases where
there are pool dups and unlinked pc files with multiple links).


  Will your new version catch the this has multiple hard links but
  not into the pool error I was seeing?  (If so yay! and thank you!)
  


I don't know what error you are referring to. My version simply
extends it to also test pc files with more than one link and fix them as
appropriate, though I haven't tested it.



Re: [BackupPC-users] Filesystem corruption: consistency of the pool

2010-12-09 Thread Robin Lee Powell
On Thu, Dec 09, 2010 at 03:15:41PM -0500, Jeffrey J. Kosowsky wrote:
 Robin Lee Powell wrote at about 12:06:24 -0800 on Thursday,
 December 9, 2010:
   On Thu, Dec 09, 2010 at 03:03:37PM -0500, Jeffrey J. Kosowsky
   wrote:
Robin Lee Powell wrote at about 11:20:26 -0800 on Thursday,
December 9, 2010:
  On Thu, Dec 09, 2010 at 09:53:25AM +0100, martin f krafft
  wrote:
   also sprach Jeffrey J. Kosowsky backu...@kosowsky.org
   [2010.11.17.0059 +0100]:
I wrote two programs that might be helpful here:
1. BackupPC_digestVerify.pl
   If you use rsync with checksum caching then this
   program checks the (uncompressed) contents of each
   pool file against the stored md4 checksum. This
   should catch any bit errors in the pool. (Note
   though that I seem to recall that the checksum only
   gets stored the second time a file in the pool is
   backed up so some pool files may not have a
   checksum included - I may be wrong since it's been
   a while...)
   
   I did a test run of this tool and it took 12 days to run
   across the pool. I cannot take the backup machine
   offline for so long. Is it possible to run this while
   BackupPC runs in the background?
   
2. BackupPC_fixLinks.pl
   This program scans through both the pool and pc
   trees to look for wrong, duplicate, or missing
   links. It can fix most errors.
   
   And this?
  
  I don't know about the first one, but BackupPC_fixLinks.pl
  can *definitely* be run while BackupPC runs.
  
  For serious corruption, you may want to grab the patch I
  posted a few days ago; it makes the run *much* slower, but
  on the plus side it will fix more errors.

I would suggest instead using the version I posted last
night... It should be much faster though still slow and may
avoid some issues...
   
   Well, I meant that version *plus* my patch. :D
 
 My version does what your patch posted a couple of days does only
 faster  probably better (i.e. your version may miss some cases
 where there are pool dups and unlinked pc files with multiple
 links).

I repeat my assertion that you are my hero.  :)

-Robin

-- 
http://singinst.org/ :  Our last, best hope for a fantastic future.
Lojban (http://www.lojban.org/): The language in which this parrot
is dead is ti poi spitaki cu morsi, but this sentence is false
is na nei.   My personal page: http://www.digitalkingdom.org/rlp/



Re: [BackupPC-users] Bizarre form of cpool corruption.

2010-12-09 Thread Jeffrey J. Kosowsky
Holger Parplies wrote at about 23:25:32 +0100 on Thursday, December 9, 2010:
  Hi,

Welcome back!!! - I was beginning to miss you on the list...
  
  Jeffrey J. Kosowsky wrote on 2010-12-07 13:16:32 -0500 [Re: [BackupPC-users] 
  Bizarre form of cpool corruption.]:
   Robin Lee Powell wrote at about 23:46:11 -0800 on Monday, December 6, 2010:
 [...]
 So, yeah, that's really it.  They're both really there, and that's
 the right md5sum, and both the pool file and the original file have
 more than 1 hardlink count, and there's no inode match.
   
   Robin, can you just clarify the context.
   Did this apparent pool corruption only occur after running
   BackupPC_tarPCCopy or did it occur in the course of normal backuppc
   running.
   
   Because if the second then I can think of only 2 ways that you would
   have pc files with more than one link but not in the pool:
   1. File system corruption
   2. Something buggy with BackupPC_nightly
   Because files in the pc directory only get multiple links after being
   linked to the pool and files only unlinked from the pool using
   BackupPC_nightly (Craig, please correct me if I am wrong here)
  
  I'm not Craig ;-), but I can think of a third possibility (meaning files may
  get multiple links *without* being linked to the pool, providing something 
  has
  previously gone wrong):
  
  3. You have unlinked files in pc trees (as you described in a seperate
 posting - missing or incomplete BackupPC_link runs) and then run an rsync
 full backup. Identical files are linked *to the corresponding file in the
 reference backup*, not to a pool file.

Ah... that of course makes sense -- for some reason I was thinking
they were literally linked to the pool, but for incrementals it
really couldn't be any other way than you are saying.

This also is a very logical explanation for how it can happen if the
Backuppc linking is not working.

If I recall correctly, the first time you do a subsequent incremental
it should all get linked back into the pool, since files are linked,
not copied, to the pool - *unless* the file is already in the pool, in
which case the new backup would be linked and the old ones would be
left orphaned. Similarly, I imagine that new fulls would leave them
stranded. Either case could explain it.
 
  4. Tampering with the pool. Just for the sake of completeness. But we don't
 do that, do we? ;-)
  
  
I would never write routines that touch the pool, would I? :)



Re: [BackupPC-users] Bizarre form of cpool corruption.

2010-12-09 Thread Holger Parplies
Hi,

Jeffrey J. Kosowsky wrote on 2010-12-07 13:16:32 -0500 [Re: [BackupPC-users] 
Bizarre form of cpool corruption.]:
 Robin Lee Powell wrote at about 23:46:11 -0800 on Monday, December 6, 2010:
   [...]
   So, yeah, that's really it.  They're both really there, and that's
   the right md5sum, and both the pool file and the original file have
   more than 1 hardlink count, and there's no inode match.
 
 Robin, can you just clarify the context.
 Did this apparent pool corruption only occur after running
 BackupPC_tarPCCopy or did it occur in the course of normal backuppc
 running.
 
 Because if the second then I can think of only 2 ways that you would
 have pc files with more than one link but not in the pool:
 1. File system corruption
 2. Something buggy with BackupPC_nightly
 Because files in the pc directory only get multiple links after being
 linked to the pool and files only unlinked from the pool using
 BackupPC_nightly (Craig, please correct me if I am wrong here)

I'm not Craig ;-), but I can think of a third possibility (meaning files may
get multiple links *without* being linked to the pool, providing something has
previously gone wrong):

3. You have unlinked files in pc trees (as you described in a separate
   posting - missing or incomplete BackupPC_link runs) and then run an rsync
   full backup. Identical files are linked *to the corresponding file in the
   reference backup*, not to a pool file.
   If I remember correctly, that is. I haven't found much time for looking
   at the code (or list mails) in the last year, so I might be mistaken, but
   I'd rather contribute the thought and be corrected than wait until I find
   the time to verify it myself :).

 If the first, then presumably something is going wrong with either
 BackupPC_tarPCCopy or how it's applied...

Just in case it's not obvious, BackupPC_tarPCCopy generates a tar file that
can *only be meaningfully extracted* against a pool similar to the one it was
created with (files *not referenced* by the tar file may, of course, be missing
or have different content - presuming you can find a usage example for
that ;-).

The hard links in the tar file reference pool file names for which the actual
file is (somewhat illegally, but that's really the whole point ;-) not
contained in the tar file. There is thus no way for tar to know if it is
actually linking to the intended file or a file with the same name but
different content - it is up to you to make sure the contents are correct.
You usually do that by copying the pool and running BackupPC_tarPCCopy
immediately afterwards, *without BackupPC modifying the source pool in
between*; you have probably stopped BackupPC altogether before starting the
pool copy.
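
For the archives, a minimal sketch of that sequence (paths are
illustrative - it assumes TopDir is /var/lib/backuppc, the install
directory is /usr/share/backuppc and the copy target is mounted at
/mnt/copy):

   /etc/init.d/backuppc stop
   # copy the pool(s) first; nothing may change between this and the next step
   rsync -a /var/lib/backuppc/cpool /var/lib/backuppc/pool /mnt/copy/
   # then recreate the pc tree, hard-linking against the pool copy just made
   mkdir -p /mnt/copy/pc
   cd /mnt/copy/pc
   /usr/share/backuppc/bin/BackupPC_tarPCCopy /var/lib/backuppc/pc | tar xvPf -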

BackupPC_nightly may rename pool files. If that happens after copying the pool
and before running BackupPC_tarPCCopy, (some of) the links will point to the
wrong file (with respect to the pool copy).

That said, I can't see how that would cause the unlinked pc files Robin is
observing. However, *using* a pool copy (i.e. running BackupPC on it) for
which BackupPC_tarPCCopy has stored the file contents, because it could not
find the pool file, would cause that file to remain outside the pool forever,
as long as you are using rsync and don't modify the file contents, as I
described above.

You probably know that, but I thought I'd clarify what I expect Jeffrey means
by "something going wrong with how BackupPC_tarPCCopy is applied".


Oh, and of course there's always

4. Tampering with the pool. Just for the sake of completeness. But we don't
   do that, do we? ;-)


Hope that helps.

Regards,
Holger



Re: [BackupPC-users] Bizarre form of cpool corruption.

2010-12-09 Thread Les Mikesell
On 12/9/2010 4:44 PM, Jeffrey J. Kosowsky wrote:


 This also is a very logical explanation for how it can happen if the
 Backuppc linking is not working.

 If I recall correctly, the first time you would do a
 subsequent incremental then it should all get linked back to the pool
 since they are linked not copied to the pool *unless* the file is
 already in the pool in which case the new backup would be linked and
 the old ones would be left orphaned. Similarly, I imagine that new
 fulls would leave them stranded. Either case could explain.

I thought that was a difference between rsync and the other methods.  Rsync
works against a previous copy, making direct links to anything that already
exists, so the pool copies are only made for new data.  Other methods copy the
whole file content over and don't bother looking at any earlier runs,
just doing the hash and the pool link or copy.

-- 
   Les Mikesell
lesmikes...@gmail.com




Re: [BackupPC-users] Bizarre form of cpool corruption.

2010-12-09 Thread Robin Lee Powell
On Thu, Dec 09, 2010 at 12:05:01AM -0500, Jeffrey J. Kosowsky wrote:
 Anyway here is the diff. I have not had time to check it much
 beyond verifying that it seems to run -- SO I WOULD TRULY
 APPRECIATE IT IF YOU CONTINUE TO TEST IT AND GIVE ME FEEDBACK.
 Also, it would be great if you would let me know approximately
 what speedup you achieved with this code vs. your original.

Yeah, I can do that.  You mind sending me a completely updated
version privately?  i.e. what you'd post to the wiki once it was
tested?

-Robin

-- 
http://singinst.org/ :  Our last, best hope for a fantastic future.
Lojban (http://www.lojban.org/): The language in which this parrot
is dead is ti poi spitaki cu morsi, but this sentence is false
is na nei.   My personal page: http://www.digitalkingdom.org/rlp/



Re: [BackupPC-users] SPNEGO login failed: invalid parameter

2010-12-09 Thread Frank J. Gómez
Smbclient version is 3.4.7.  The user and I will not again be in the office
at the same time until Tuesday, but I believe the share is inaccessible
(invisible, even) from other Windows machines.  I should mention that other
Windows 7 machines are working smoothly with BackupPC.  I think it is
something specific to her laptop's configuration.

Thanks,
-Frank

On Thu, Dec 9, 2010 at 11:30 AM, Carl Wilhelm Soderstrom 
chr...@real-time.com wrote:

 On 12/09 09:53 , Frank J. Gómez wrote:
   smbclient 13708n1\\win7home -U backuppc -E -d 3
  
   I get this in the output:
  
   Doing spnego session setup (blob length=336)
  
SPNEGO login failed: Invalid parameter

 snip

  
   Here's another interesting bit of information; when I run:
  

 snip

 Is the share reachable from other Windows machines?
 If so, what version?
 What smbclient version are you using? I believe 3.4.6 or above is needed
 for
 full interoperability with Win7; unless you apply some registry hacks to
 turn off some security features. My knowlege is pretty sketchy tho.

 --
 Carl Soderstrom
 Systems Administrator
 Real-Time Enterprises
 www.real-time.com




Re: [BackupPC-users] Another jLib/fixLinks issue.

2010-12-09 Thread Robin Lee Powell
On Tue, Dec 07, 2010 at 02:27:46PM -0500, Jeffrey J. Kosowsky wrote:
 Robin Lee Powell wrote at about 11:05:46 -0800 on Tuesday, December 7, 2010:
   On Tue, Dec 07, 2010 at 01:58:28PM -0500, Jeffrey J. Kosowsky wrote:
Robin Lee Powell wrote at about 15:40:04 -0800 on Monday, December 6, 
 2010:
  This is *fascinating*.
  
  From the actually-fixing-stuff part of the run, I get:
  
ERROR: 
 tm50-s00292__nfs/68/f%2f/fshared/fthepoint/fsite_images/f0042/f4097/fMULTI_medium.jpg
  - Too many links if added to 59c43b51dbdd9031ba54971e359cdcec
  
  to which I say lolwut? and investigate.
  
  $ ls -li /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec*
  2159521202 -rw-r- 31999 backuppc backuppc 76046 Oct  7 08:29 
 /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec
  2670969865 -rw-r- 31999 backuppc backuppc 76046 Oct 16 15:15 
 /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_0
79561977 -rw-r- 31999 backuppc backuppc 76046 Oct 22 22:07 
 /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_1
   156369809 -rw-r- 31999 backuppc backuppc 76046 Oct 31 09:06 
 /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_2
  3389777838 -rw-r- 31999 backuppc backuppc 76046 Nov  7 09:10 
 /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_3
   106188559 -rw-r- 31999 backuppc backuppc 76046 Nov 13 15:10 
 /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_4
   247044591 -rw-r- 31999 backuppc backuppc 76046 Nov 19 17:20 
 /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_5
   293083240 -rw-r- 31999 backuppc backuppc 76046 Nov 26 06:14 
 /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_6
   513555136 -rw-r- 31999 backuppc backuppc 76046 Dec  1 19:37 
 /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_7
52908307 -rw-r-  7767 backuppc backuppc 76046 Dec  4 10:37 
 /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_8
  $ ls -li 
 /backups/pc/tm50-s00292__nfs/68/f%2f/fshared/fthepoint/fsite_images/f0042/f4097/fMULTI_medium.jpg
 374791856 -rw-r- 1 backuppc backuppc 76046 Dec  4 08:03 
 /backups/pc/tm50-s00292__nfs/68/f%2f/fshared/fthepoint/fsite_images/f0042/f4097/fMULTI_medium.jpg
  
  That's a bunch of files with *thirty two thousand* hard links.
  Apparently that's a limit of some kind.  BackupPC handles this by
  adding new copies, a hack that BackupPC_fixLinks is apparently
  unaware of.

BackupPC_fixLinks does know about the limit and in fact is careful
not to exceed it (using the same hack) when it combines/rewrites
links. Other than that, I'm not sure where you think
BackupPC_fixLinks needs to be aware of it?
   
   I would expect it to not emit an ERROR there?  :)  Shouldn't it move
   to the next file, and the next, and so on, until it finds one it
   *can* link to?
   
   It emitted thousands of such ERROR lines; surely that's not good
   behaviour.
 
 Well, it was designed (and tested) for the use case where this was
 a *rare* event so that it would be interesting to signal it.
 Perhaps even then  WARN or NOTICE would have been better than
 ERROR. Indeed, that would be a good change (and you could always
 'grep -v' it out of your results).
 
 My thinking was that in the case of a messed-up pool knowing that
 some files had 32000 links would be worthy of notice of
 course, it seems like for you this is a non note-worthy
 occurrence.
 
 Now per my comments in the code, this doesn't break anything, it
 only means that the links can't be combined and so pool usage
 can't be freed up for that file. 

I'm worried we're talking past each other, so be gentle if I'm
confused.  :)

If I have thousands of such files, each copy takes up the usual
amount of space.  They *should* be linked into the pool, so as to
take up 32k times less space.  The reason I ran it in the first
place was to link unlinked files like this into the pool; in this
case, unless I'm missing something, they stayed unlinked.

Since my goal was to free up space, it's important to me.
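
A quick, read-only way to gauge the scale (paths are illustrative, and
this ignores the handful of per-backup bookkeeping files that
legitimately have a single link):

   # pc-tree files not hard-linked anywhere, i.e. candidates for pooling
   find /backups/pc -type f -links 1 -size +0c | wc -l
   # pool entries already at (or near) the hard-link ceiling
   find /backups/cpool -type f -links +31990 | wc -l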

I agree it's something of an edge case, though, and if you don't
want to fix it I'd totally understand.

-Robin

-- 
http://singinst.org/ :  Our last, best hope for a fantastic future.
Lojban (http://www.lojban.org/): The language in which this parrot
is dead is ti poi spitaki cu morsi, but this sentence is false
is na nei.   My personal page: http://www.digitalkingdom.org/rlp/



Re: [BackupPC-users] Another jLib/fixLinks issue.

2010-12-09 Thread Robin Lee Powell
On Thu, Dec 09, 2010 at 12:35:46AM -0500, Jeffrey J. Kosowsky wrote:
 Jeffrey J. Kosowsky wrote at about 13:58:28 -0500 on Tuesday, December 7, 
 2010:
   Robin Lee Powell wrote at about 15:40:04 -0800 on Monday, December 6, 2010:
 This is *fascinating*.
 
 From the actually-fixing-stuff part of the run, I get:
 
   ERROR: 
 tm50-s00292__nfs/68/f%2f/fshared/fthepoint/fsite_images/f0042/f4097/fMULTI_medium.jpg
  - Too many links if added to 59c43b51dbdd9031ba54971e359cdcec
 
 to which I say lolwut? and investigate.
 
 $ ls -li /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec*
 2159521202 -rw-r- 31999 backuppc backuppc 76046 Oct  7 08:29 
 /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec
 2670969865 -rw-r- 31999 backuppc backuppc 76046 Oct 16 15:15 
 /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_0
   79561977 -rw-r- 31999 backuppc backuppc 76046 Oct 22 22:07 
 /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_1
  156369809 -rw-r- 31999 backuppc backuppc 76046 Oct 31 09:06 
 /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_2
 3389777838 -rw-r- 31999 backuppc backuppc 76046 Nov  7 09:10 
 /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_3
  106188559 -rw-r- 31999 backuppc backuppc 76046 Nov 13 15:10 
 /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_4
  247044591 -rw-r- 31999 backuppc backuppc 76046 Nov 19 17:20 
 /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_5
  293083240 -rw-r- 31999 backuppc backuppc 76046 Nov 26 06:14 
 /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_6
  513555136 -rw-r- 31999 backuppc backuppc 76046 Dec  1 19:37 
 /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_7
   52908307 -rw-r-  7767 backuppc backuppc 76046 Dec  4 10:37 
 /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_8
 $ ls -li 
 /backups/pc/tm50-s00292__nfs/68/f%2f/fshared/fthepoint/fsite_images/f0042/f4097/fMULTI_medium.jpg
 374791856 -rw-r- 1 backuppc backuppc 76046 Dec  4 08:03 
 /backups/pc/tm50-s00292__nfs/68/f%2f/fshared/fthepoint/fsite_images/f0042/f4097/fMULTI_medium.jpg
 
 That's a bunch of files with *thirty two thousand* hard links.
 Apparently that's a limit of some kind.  BackupPC handles this by
 adding new copies, a hack that BackupPC_fixLinks is apparently
 unaware of.
   
   BackupPC_fixLinks does know about the limit and in fact is careful not
   to exceed it (using the same hack) when it combines/rewrites links.
   Other than that, I'm not sure where you think BackupPC_fixLinks needs
   to be aware of it?
   
   To be fair, since I don't have any systems with that many hard links,
   I have not tested that use case so perhaps my code is missing
   something (I haven't looked through the logic of how BackupPC_fixLinks
   traverses chains in a while so maybe there is something there that
   needs to be adjusted for your use case but again since I haven't
   encountered it I probably have not given it enough thought)
   
 
 Robin, can you let me know in what way you think BackupPC misses
 here? It seems to me that my program does the following:

 1. It avoids calling a pool element a duplicate if the sum of the
 number of links in the duplicates exceeds the maximum link number
 (i.e. the pool duplicate is justified)
 
 2. When it fixes/combines links, it avoids exceeding the maximum
 link number and creates a new element of the md5sum chain instead.
 
 Is there any other way that maxlinks comes into play that I am
 missing?

*blink*

I was under the impression that it did *not* do "creates a new
element of the md5sum chain instead".

I took the error to mean "I see too many links to this file already,
so screw it, I'm giving up and leaving this file alone."

If the file *does* get linked in despite the error, then yeah,
that's totally fine, although I'd change the wording.  I read "Too
many links if added" to mean "so I'm not going to add it".

-Robin

-- 
http://singinst.org/ :  Our last, best hope for a fantastic future.
Lojban (http://www.lojban.org/): The language in which this parrot
is dead is ti poi spitaki cu morsi, but this sentence is false
is na nei.   My personal page: http://www.digitalkingdom.org/rlp/



Re: [BackupPC-users] Bizarre form of cpool corruption.

2010-12-09 Thread Jeffrey J. Kosowsky
Robin Lee Powell wrote at about 15:24:30 -0800 on Thursday, December 9, 2010:
  On Thu, Dec 09, 2010 at 12:05:01AM -0500, Jeffrey J. Kosowsky wrote:
   Anyway here is the diff. I have not had time to check it much
   beyond verifying that it seems to run -- SO I WOULD TRULY
   APPRECIATE IT IF YOU CONTINUE TO TEST IT AND GIVE ME FEEDBACK.
   Also, it would be great if you would let me know approximately
   what speedup you achieved with this code vs. your original.
  
  Yeah, I can do that.  You mind sending me a completely updated
  version privately?  i.e. what you'd post to the wiki once it was
  tested?
  
Sure...



BackupPC_fixLinks.pl
Description: Binary data


Re: [BackupPC-users] Another jLib/fixLinks issue.

2010-12-09 Thread Jeffrey J. Kosowsky
Robin Lee Powell wrote at about 15:35:26 -0800 on Thursday, December 9, 2010:
   Well, it was designed (and tested) for the use case where this was
   a *rare* event so that it would be interesting to signal it.
   Perhaps even then  WARN or NOTICE would have been better than
   ERROR. Indeed, that would be a good change (and you could always
   'grep -v' it out of your results).
   
   My thinking was that in the case of a messed-up pool knowing that
   some files had 32000 links would be worthy of notice of
   course, it seems like for you this is a non note-worthy
   occurrence.
   
   Now per my comments in the code, this doesn't break anything, it
   only means that the links can't be combined and so pool usage
   can't be freed up for that file. 
  
  I'm worried we're talking past each other, so be gentle if I'm
  confused.  :)
  
  If I have thousands of such files, each copy takes up the usual
  amount of space.  They *should* be linked into the pool, so as to
  take up 32k times less space.  The reason I ran it in the first
  place was to link unlinked files like this into the pool; in this
  case, unless I'm missing something, they stayed unlinked.
  
  Since my goal was to free up space, it's important to me.
  
  I agree it's something of an edge case, though, and if you don't
  want to fix it I'd totally understand.
  

I think it's neither a right nor a wrong thing. For me, and probably for
many average users, having 32000 links is more likely a sign of
something gone wrong than a boring everyday occurrence.
For you, I understand, it is common and an annoyance, since it seems
to signal errors where none truly exist, and it distorts the error
count to boot.

As a compromise between these use cases, I did the following:
1. Changed Error to Warn - I think it's still a good warning to
   know that there are dups that are uncorrectable, though for good
   reason.

2. I stopped it from increasing the error count.

Here is the modified version:


BackupPC_fixLinks.pl
Description: Binary data


Re: [BackupPC-users] Another jLib/fixLinks issue.

2010-12-09 Thread Jeffrey J. Kosowsky
Robin Lee Powell wrote at about 15:38:27 -0800 on Thursday, December 9, 2010:
  On Thu, Dec 09, 2010 at 12:35:46AM -0500, Jeffrey J. Kosowsky wrote:
   Jeffrey J. Kosowsky wrote at about 13:58:28 -0500 on Tuesday, December 7, 
   2010:
 Robin Lee Powell wrote at about 15:40:04 -0800 on Monday, December 6, 
   2010:
   This is *fascinating*.
   
   From the actually-fixing-stuff part of the run, I get:
   
 ERROR: 
   tm50-s00292__nfs/68/f%2f/fshared/fthepoint/fsite_images/f0042/f4097/fMULTI_medium.jpg
- Too many links if added to 59c43b51dbdd9031ba54971e359cdcec
   
   to which I say lolwut? and investigate.
   
   $ ls -li /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec*
   2159521202 -rw-r- 31999 backuppc backuppc 76046 Oct  7 08:29 
   /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec
   2670969865 -rw-r- 31999 backuppc backuppc 76046 Oct 16 15:15 
   /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_0
 79561977 -rw-r- 31999 backuppc backuppc 76046 Oct 22 22:07 
   /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_1
156369809 -rw-r- 31999 backuppc backuppc 76046 Oct 31 09:06 
   /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_2
   3389777838 -rw-r- 31999 backuppc backuppc 76046 Nov  7 09:10 
   /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_3
106188559 -rw-r- 31999 backuppc backuppc 76046 Nov 13 15:10 
   /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_4
247044591 -rw-r- 31999 backuppc backuppc 76046 Nov 19 17:20 
   /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_5
293083240 -rw-r- 31999 backuppc backuppc 76046 Nov 26 06:14 
   /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_6
513555136 -rw-r- 31999 backuppc backuppc 76046 Dec  1 19:37 
   /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_7
 52908307 -rw-r-  7767 backuppc backuppc 76046 Dec  4 10:37 
   /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_8
   $ ls -li 
   /backups/pc/tm50-s00292__nfs/68/f%2f/fshared/fthepoint/fsite_images/f0042/f4097/fMULTI_medium.jpg
   374791856 -rw-r- 1 backuppc backuppc 76046 Dec  4 08:03 
   /backups/pc/tm50-s00292__nfs/68/f%2f/fshared/fthepoint/fsite_images/f0042/f4097/fMULTI_medium.jpg
   
   That's a bunch of files with *thirty two thousand* hard links.
   Apparently that's a limit of some kind.  BackupPC handles this by
   adding new copies, a hack that BackupPC_fixLinks is apparently
   unaware of.
 
 BackupPC_fixLinks does know about the limit and in fact is careful not
 to exceed it (using the same hack) when it combines/rewrites links.
 Other than that, I'm not sure where you think BackupPC_fixLinks needs
 to be aware of it?
 
 To be fair, since I don't have any systems with that many hard links,
 I have not tested that use case so perhaps my code is missing
 something (I haven't looked through the logic of how BackupPC_fixLinks
 traverses chains in a while so maybe there is something there that
 needs to be adjusted for your use case but again since I haven't
 encountered it I probably have not given it enough thought)
 
   
   Robin, can you let me know in what way you think BackupPC misses
   here? It seems to me that my program does the following:
  
   1. It avoids calling a pool element a duplicate if the sum of the
   number of links in the duplicates exceeds the maximum link number
   (i.e. the pool duplicate is justified)
   
   2. When it fixes/combines links, it avoids exceeding the maximum
   link number and creates a new element of the md5sum chain instead.
   
   Is there any other way that maxlinks comes into play that I am
   missing?
  
  *blink*
  
  I was under the impression that it did *not* do creates a new
  element of the md5sum chain instead..
  
  I took the error to mean "I see too many links to this file already,
  so screw it, I'm giving up and leaving this file alone."
  
  If the file *does* get linked in despite the error, then yeah,
  that's totally fine.

I believe the logic is that that error is only seen when the file is
already linked into the pool (either pre-existing or earlier in the run)
-- so all that happens is that it doesn't consolidate already-existing
pool links. And that is why I commented it as I did.

But feel free to look at the code and/or your results to make sure
that I am remembering this right and that I coded it right in the
first place. Fresh eyes can never hurt...

  although I'd change the wording.  I read "Too many links if added"
  to mean "so I'm not going to add it."
  


Already done!
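
For reference, the "add a new chain element" behaviour discussed above works
roughly like the following Perl sketch. This is not BackupPC's or
BackupPC_fixLinks' actual code: the link limit, the chain walk, and the plain
byte-for-byte compare are simplifying assumptions (the real pooling code also
has to deal with compression and partial-file digests).

  # Sketch only -- illustrates the _0/_1/... chain hack, not real BackupPC code.
  use strict;
  use warnings;
  use File::Compare qw(compare);

  my $HardLinkMax = 31999;   # stay below the filesystem's 32000-link limit

  # Link $pcfile against the pool entry whose digest is $poolbase
  # (e.g. .../5/9/c/59c43b51dbdd9031ba54971e359cdcec), walking the chain
  # and starting a new chain element once every matching element is full.
  sub pool_link {
      my ($pcfile, $poolbase) = @_;
      my ($cand, $suffix) = ($poolbase, 0);
      while (1) {
          if (!-e $cand) {
              # end of the chain: make $pcfile the newest pool element
              return link($pcfile, $cand);
          }
          my $nlink = (lstat($cand))[3];
          if ($nlink < $HardLinkMax && compare($pcfile, $cand) == 0) {
              # matching element with room left: re-point the pc file at it
              unlink($pcfile) or return 0;
              return link($cand, $pcfile);
          }
          # element is full (or has different content): try the next suffix
          $cand = $poolbase . "_" . $suffix++;
      }
  }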


Re: [BackupPC-users] Bizarre form of cpool corruption.

2010-12-09 Thread Robin Lee Powell
On Thu, Dec 09, 2010 at 06:41:22PM -0500, Jeffrey J. Kosowsky wrote:
 Robin Lee Powell wrote at about 15:24:30 -0800 on Thursday, December 9, 2010:
   On Thu, Dec 09, 2010 at 12:05:01AM -0500, Jeffrey J. Kosowsky wrote:
Anyway here is the diff. I have not had time to check it much
beyond verifying that it seems to run -- SO I WOULD TRULY
APPRECIATE IT IF YOU CONTINUE TO TEST IT AND GIVE ME FEEDBACK.
Also, it would be great if you would let me know approximately
what speedup you achieved with this code vs. your original.
   
   Yeah, I can do that.  You mind sending me a completely updated
   version privately?  i.e. what you'd post to the wiki once it was
   tested?
   
 Sure...

Well, initially:


ut00-s8 pc # sudo -u backuppc /var/tmp/BackupPC_fixLinks
Subroutine jlink redefined at /var/tmp/BackupPC_fixLinks line 597.
Subroutine junlink redefined at /var/tmp/BackupPC_fixLinks line 603.
Use of uninitialized value in numeric eq (==) at /var/tmp/BackupPC_fixLinks 
line 99.


The first two seem deliberate, but are surprising.
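
If the redefinitions are in fact deliberate, one way to keep the output clean
would be something like the following -- only a sketch, since the relevant
code isn't shown here, and the variable in the guarded comparison is a
made-up placeholder:

  # silence the (lexically scoped) "Subroutine ... redefined" warnings
  # around the intentional overrides:
  {
      no warnings 'redefine';
      sub jlink   { ... }    # deliberate override, body elided
      sub junlink { ... }    # deliberate override, body elided
  }

  # and guard the numeric comparison that triggers the uninitialized
  # warning, e.g.:
  #   if (defined $nlinks && $nlinks == 1) { ... }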

Oh, hey, a request: can you add $|=1; to your scripts?  I end up
adding it regularly because I want to save the output but I also
want to see that it's doing something, so I do things like:

  $ sudo -u backuppc /var/tmp/BackupPC_fixLinks | tee /tmp/fix.out

which appears to do nothing for ages due to buffering.
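
For what it's worth, either of these near the top of the script turns off
the buffering:

  $| = 1;                      # unbuffer STDOUT

  # or, equivalently:
  use IO::Handle;
  STDOUT->autoflush(1);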

I have a super-giant run going now; I'll let you know how it goes.
It will likely take many days.

-Robin

-- 
http://singinst.org/ :  Our last, best hope for a fantastic future.
Lojban (http://www.lojban.org/): The language in which "this parrot
is dead" is "ti poi spitaki cu morsi", but "this sentence is false"
is "na nei".   My personal page: http://www.digitalkingdom.org/rlp/



Re: [BackupPC-users] Bizarre form of cpool corruption.

2010-12-09 Thread Holger Parplies
Hi,

Les Mikesell wrote on 2010-12-09 17:08:05 -0600 [Re: [BackupPC-users] Bizarre 
form of cpool corruption.]:
 On 12/9/2010 4:44 PM, Jeffrey J. Kosowsky wrote:
 
  If I recall correctly, the first time you would do a
  subsequent incremental then it should all get linked back to the pool,
  since they are linked, not copied, to the pool -- *unless* the file is
  already in the pool, in which case the new backup would be linked and
  the old ones would be left orphaned. Similarly, I imagine that new
  fulls would leave them stranded. Either case could explain it.
 
 I thought that was a difference between rsync/others.  Rsync works
 against a previous copy, making direct links to anything that already
 exists, so the pool copies are only for new data.  Other methods copy the
 whole file content over and don't bother looking at any earlier runs,
 just doing the hash and pool link or copy.

just to clarify:

1. Non-rsync-XferMethods never link to previous backups, only to the pool.
   If new files aren't BackupPC_link-ed into the pool (which should not
   happen, see below), they'll have exactly one hard link and will never
   acquire more.

2. rsync *incrementals* only create entries for *changed* files. These are
   linked to the pool if a matching file exists, or otherwise entered into the
   pool as new files (which may fail if BackupPC_link is not run or runs
   incompletely -- which should never happen under normal circumstances, just
   to be clear).
   Thus, rsync *incrementals* will never create new links to orphaned files.

3. rsync *full backups* create entries for *all files*. Changed files are
   treated as with incrementals (i.e. linked to the pool). *Un*changed files
   are linked to the same file in the reference backup. This *should normally*
   be a link to a pool file, making the new entry also be linked to the pool.
   If, however, it is not (and this is the case we were originally talking
   about), the new entry will also not find its way into the pool. This is how
   a multi-link file without a pool entry can come into existence.

   I believe BackupPC *could* in fact detect this case (if the file we're
   about to link to has only one link, we should try to link to the pool
   instead -- and possibly also correct the reference file; see the sketch
   after this list), but I haven't checked the source for reasons why this
   might not work, and I don't expect I'll be writing a patch anytime
   soon :(. Also, I can't estimate whether this problem is common enough to
   be worth the effort (of coding and of slowing down rsync-Xfer, if only
   slightly). (*)

   I'm not sure what happens if the link count of the reference file reaches
   HardLinkMax -- I would expect a new entry *in the pool* to be made.

4. rsync will *not* link to anything except the exact same file in the
   reference backup (because it does not notice that there may be an identical
   file elsewhere in the reference backup or anywhere in other backups).
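
A rough sketch of the check suggested in point 3 -- not a patch; the sub name
and the way the reference and pool files are located are invented purely for
illustration:

   use File::stat;

   # Before hard-linking the new backup's file to the same file in the
   # reference backup, notice when that reference file is orphaned from
   # the pool (link count 1) and prefer a link to the pool file instead.
   sub link_unchanged_file {
       my ($newfile, $reffile, $poolfile) = @_;
       my $st = lstat($reffile) or return 0;
       if ($st->nlink == 1 && defined $poolfile && -e $poolfile) {
           # orphaned reference: link the new entry to the pool
           # (and ideally re-point the reference file as well)
           return link($poolfile, $newfile);
       }
       return link($reffile, $newfile);
   }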

Regards,
Holger

(*) Just to describe how this situation can also occur:
I knowingly introduced it into my pool when I had to start over due to
pool FS corruption and desperately *needed* a reference backup for a large
data set on the other end of a slow link. I copied the last backup from
the corrupted pool FS and ran a full backup to make sure I had intact data.
I was going to fix the problem later or live with the (in my case
harmless) duplication.
BTW, this is an example of tampering with the pool ;-).



Re: [BackupPC-users] Are all non-zero length files in the pc tree stored in the pool?

2010-12-09 Thread Craig Barratt
Jeffrey writes:

 But I also wanted to confirm that this holds true not just for normal
 files but for links (soft & hard) and other special files.

Yes, all non-empty files should be pooled (including hard links, soft
links, char/block special files and attribute files).

Craig
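
As a quick cross-check of the above, something along these lines lists
non-empty pc-tree files that have only a single link, i.e. that never made it
into the pool. Just a sketch: the TopDir path is an assumption, and per-backup
metadata (LOG*, backups, backupInfo, ...) legitimately shows up with one link,
so treat the output as candidates to inspect, not as errors.

  use strict;
  use warnings;
  use File::Find;

  my $pcdir = '/backups/pc';   # assumed $TopDir/pc location
  find(sub {
      my @st = lstat($_) or return;
      return unless -f _;          # plain files only
      return unless $st[7] > 0;    # empty files are never pooled
      return if $st[3] > 1;        # more than one link means already pooled
      print "$File::Find::name\n";
  }, $pcdir);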
