Re: [BackupPC-users] Backup corrupted, impossible to correct it

2024-10-24 Thread Tim Connors
On Mon, 16 Sep 2024, G.W. Haywood wrote:

> > I tried 3 times to wipe and start from an empty pool, but this comes
> > back every time. So there's a bug in BackupPC. It might only be
> > triggered on ZFS (or maybe it's related to the speed of the
> > underlying storage, as this one is really the fastest I have ever
> > used). But there's something. For the first time in more than 18
> > years of using BackupPC,
>
> I've been running BackupPC for nearly as long as you have, and I've
> never seen such errors.  But I run ext4.  It would take an awful lot
> of persuasion and years of testing for me to switch to anything else.
> Last time I tried one of the latest and greatest new filesystems, the
> whole thing went belly up in days.  I vowed never to do that again.
>
> AFAICT so far, the best we can say is that the combination of BackupPC
> and ZFS on Linux might be problematic.  I can't say I'm surprised, but
> I can say that we really don't yet know where your problem lies.  So
> far, I only know of two people running BackupPC on ZFS on Linux.  [*]
> Both have posted to Github issue 494.  Unless there's some lurker here
> who's keeping very quiet, in my view as I said above only you and that
> other person are in a position in which they will be able to collect
> evidence to identify BackupPC as the culprit.  Evidence, not anecdote.


BackupPC user of 20-odd years here: I moved from ext4 to XFS (lost data) to
btrfs (lost data) to ZFS around 2010-2012, moved it into a VM inside Proxmox
talking to the original raw disks in 2019/20, and upgraded to V4 sometime
around then.

I've had the "missing pool file" error mentioned in the last comment on #494
(not "can't open pool file") whenever a client crashes halfway through a
backup (like my desktop did yesterday).  Sometimes I never find which backup
caused the messages, and since they can tally into the hundreds or thousands
and pollute my logfile every day, I roll my BackupPC pool back a day or a few
in ZFS, to a snapshot taken before any of the noisy backups, and the messages
go away (although there are 4 residual "missing pool file" messages in my
daily log where I never identified which backup they came from).
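
In case it helps anyone else, that rollback is roughly the sketch below,
assuming the pool lives on its own dataset with regular snapshots (the
dataset and snapshot names here are illustrative, not my real ones):

# stop BackupPC so nothing writes to the pool during the rollback
systemctl stop backuppc

# list the snapshots of the dataset holding the pool
zfs list -t snapshot -r tank/backuppc

# roll back to the last snapshot taken before the noisy backups appeared;
# -r also destroys any snapshots newer than the rollback target
zfs rollback -r tank/backuppc@daily-2024-10-20

systemctl start backuppc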

Never had any other identifiable corruption on ZFS though.

*However*, ZFS on Linux has had *plenty* of sparse-file-related bugs to do
with snapshots and similar.  It was getting so bad at one point that I nearly
stopped trusting the filesystem (my mind has been more at ease since I
stopped having the time to follow the mailing list discussions).  I don't
think BackupPC creates files with holes in them, though.

-- 
Tim Connors


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Backup corrupted, impossible to correct it

2024-10-24 Thread Tim Connors
On Thu, 19 Sep 2024, Daniel Berteaud via BackupPC-users wrote:

> - On 18 Sep 24, at 18:39, Paul Leyland paul.leyl...@gmail.com wrote:
>
> > My earlier report indicating no known problems was for unencrypted ZFS
> > on Linux.
> >
> > Something over five years ago encrypted ZFS was unavailable. Today my
> > threat model considers the probability of someone breaking into my house
> > and stealing the server or disks as acceptably low. I do, of course,
> > have several layered protective mechanisms between you and me.
>
>
> Not sure yet, but it seems the problem is at least easier to trigger (maybe 
> even only happens in this context) when I backup /etc/pve on some proxmox VE 
> servers, with rsync. In this directory is mounted a fuse based FS (pmxcfs 
> which is a tiny corosync based clustered filesystem, with only a few MB of 
> small text files). Just got a lot of errors during backup of 2 Proxmox 
> servers this morning :
>
> First, 38 errors like this :
>
> G bpc_fileOpen: can't open pool file 
> /var/lib/BackupPC//cpool/b6/d8/b6d9603ea5001178b6020466b548b412 (from 
> etc/pve/nodes/imm-pve-cour-1/qemu-server/159.conf, 3, 16)
> rsync_bpc: failed to open 
> "/etc/pve/nodes/imm-pve-cour-1/qemu-server/159.conf", continuing: No such 
> file or directory (2)
>
> Followed by 38 errors like this :
>
> R bpc_fileOpen: can't open pool file 
> /var/lib/BackupPC//cpool/b6/d8/b6d9603ea5001178b6020466b548b412 (from 
> etc/pve/nodes/imm-pve-cour-1/qemu-server/159.conf, 3, 16)
>
>
> (all 38 of the errors refer to files in /etc/pve)

Just piping up because I am also backing up PVE hosts (running Proxmox
native, not Proxmox on top of Debian) from a Debian VM (inside one of those
hosts, as it happens).  The pmxcfs fuse filesystem poses no problems for me -
it's a very static filesystem, and it's very quick to back up because it's so
tiny - but if a file were to change from underneath it, it would behave like
any other filesystem as far as rsync is concerned.

What it *doesn't* like is being restored to without rsync's --inplace
option, because when you open a file for writing, if the content doesn't make
sense to pmxcfs, the fuse filesystem will just drop it on the floor.  So
rsync's default behaviour of writing a temporary file and then renaming it
atomically over the destination fails, because that temporary file has been
dropped on the floor.  But reading the files for backup poses no such
problems.
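
For the restore direction, a rough sketch of what I mean, reusing the path
from the log above purely for illustration:

# pmxcfs silently drops files it can't parse, so rsync's usual
# write-to-temp-file-then-rename dance fails; write into the destination
# file directly instead
rsync --inplace /tmp/restore/159.conf \
    root@imm-pve-cour-1:/etc/pve/nodes/imm-pve-cour-1/qemu-server/159.conf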


-- 
Tim Connors




Re: [BackupPC-users] BackupPC remembers _everything_

2024-06-21 Thread Tim Connors
On Mon, 6 May 2024, Kirby wrote:

> BackupPC has been covering up my stupid mistakes since 2005. Fortunately, I
> have never done the 'rm -r *' until last week. Good thing was that I was in my
> home directory so the system itself was untouched and I caught myself before
> too much could get deleted.
>
> 'Not a problem!' I thought. I will just restore from last night's backup and
> be on my way. I selected the missing files and directories, started the
> restore, and went for a walk. When I got back, things were in a sorry state.
> My ~/Downloads directory had filled up my drive, including stuff that had
> been deleted 5 years ago.
>
> Am I misunderstanding how fill works? I thought it was filled from the last
> backup going back to the last non-filled backup. Instead it looks like it is
> pulling in everything it has ever backed up.
>
> I am running BackupPC-4.4.0-1.el8.x86_64.

My guess is that you were backing up using rsync with the default flags
BackupPC specifies, which do not include --ignore-errors.  You were then
backing up a filesystem that had I/O errors (/home/$user/.cache/doc), so
rsync was not sending the file deletions to BackupPC, and BackupPC was
operating under the assumption that the files you had deleted in the
meantime were all still there.

Unfortunately, the only bug I know of asking for --ignore-errors to be
included by default has been closed without that fix:
https://github.com/backuppc/backuppc/issues/87

Also unfortunately, this affects all your previous backups, so there's no way
of recovering what your filesystem actually looked like last night.
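
If you want to add the flag locally anyway, a rough sketch (the exact
variable name is from memory, so check your config.pl; this assumes the
rsync transfer method):

# see which rsync arguments BackupPC currently passes
grep -n 'RsyncArgs' /etc/backuppc/config.pl

# then add '--ignore-errors' to $Conf{RsyncArgsExtra} - globally or per
# host, via the web config editor or by editing config.pl - so deletions
# are still propagated when the client filesystem returns I/O errors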

-- 
Tim Connors




[BackupPC-users] deleting a single file from backuppc v4

2021-12-22 Thread Tim Connors
Hi all,

Anyone got a way to delete a single path from all historical backups?
Doesn't have to be immediate of course - happy for the nightly job to take
out the trash.

It was trivial in backuppc v3 - just go to the encoded path in the cpool
and truncate the file to 0 to take care of all the hardlinks.  But I
suspect here it's a case of finding the mapping of the file from each
backup (that info is not in $machine/XferLOG.4218.z, so I don't know where
else it would be), and then truncate/remove (since no more hardlinks),
while also taking care of legacy v3 files.
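
(For the record, the v3 trick looked roughly like the sketch below; the
pool path and hash are purely illustrative, and it empties that file's
content in every backup that links to it.)

# a v3 pool is a hardlink farm, so truncating the pool file in place
# empties it in every backup tree that references it
cd /var/lib/backuppc/cpool/a/b/c
: > abc0123456789abcdef0123456789abcd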

-- 
Tim Connors




Re: [BackupPC-users] What file system do you use?

2013-12-16 Thread Tim Connors
On Mon, 16 Dec 2013, Timothy J Massey wrote:

> One last thing:  everyone who uses ZFS raves about it.  But seeing as (on
> Linux) you're limited to either FUSE or out-of-tree kernel modules (of
> questionable legality:  ZFS' CDDL license is *not* GPL compatible), it's
> not my first choice for a backup server, either.

I am using it, and it sucks for a BackupPC load.  In fact, judging from the
mailing list, it is currently (and has been for a couple of years) terrible
on an rsync-style workload - any metadata-heavy workload will eventually
crash the machine after a couple of weeks of uptime.  Some patches that look
promising are being tested out of tree right now, but I won't be testing them
myself until they hit a 0.6.3 release.

The problem for me is that it takes about a month to migrate to a new
filesystem.  I migrated to ZFS a couple of years ago with insufficient
testing.  I should have stayed on ext4+mdadm (XFS was terrible too - no
faster than ext4, and given that I've always lost data on various systems
with it because it's such a flaky filesystem, I wasn't gaining anything).
mdadm is more flexible than ZFS, although harder to configure.  With
mdadm+ext4, you can choose any disk arrangement you like without being
limited to simple RAID-Z(n) arrangements of equal-sized disks.  That said, I
do prefer ZFS's scrubbing to mdadm's, but only slightly.  If I were starting
from scratch and didn't have 4-5 years of backup archives, I'd tell BackupPC
to turn off compression and munging of the pool, and let ZFS do it.
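
That last idea is only a couple of commands plus one config change (the
dataset name is illustrative):

# let ZFS handle compression (and checksumming) instead of BackupPC
zfs create -o compression=on -o atime=off tank/backuppc

# then set $Conf{CompressLevel} = 0; in config.pl so BackupPC stores the
# pool uncompressed and ZFS compresses it transparently underneath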

I used JFS 10 years ago, and "niche buggy product" would be my description
for it.  Basically, go with the well tested popular FSs, because they're
not as bad as everyone makes them out to be.

-- 
Tim Connors



Re: [BackupPC-users] BackupPC as a personal backup solution?

2013-06-25 Thread Tim Connors
On Sun, 23 Jun 2013, Daniel Carrera wrote:

> Hi all,
>
> I'd like to ask an opinion question: Do you think BackupPC is a
> sensible backup solution for personal, non-enterprise,
> only-one-computer use?

Of course.  Quite easy to set up, but the others have already given you
the caveat to make sure there's some redundancy.

> I am an astronomer.

Heh.  I once lost a CPU-year of simulation data when I did an rm -rf in the
source rather than the destination tree while trying to rsync the run across
from the supercomputer.  The supercomputer was running backups (an
Enterprisey system, where Enterprise==money and !=quality, as usual) that
silently failed to back up a path of ridiculous length[1].  Rest assured that
BackupPC doesn't suffer from this problem :)

> I produce a fair amount of data that I want backed
> up and I frequently rename directories that contain 60 - 300 GB of
> data. Obviously, I don't want all of that to be re-transmitted and
> re-copied just because I moved a directory.

After a move, the data will still be read and copied across the network (how
else do you verify that the file really is the same?  git also reads the file
before comparing it with its pool), but the pool collision will be detected
and the file linked to its new location.  It sounds like BackupPC 4.0.0alpha0
(just released - I suggest you don't try it just yet :) will not need the
network transfer even after a move, which will be extremely nifty when
backing up your mum's computer from across the country - just get her to
send a USB stick of all her photos, copy them into a temporary location, back
them up, then back up her computer.

But my own 300GB thesis directory only takes about 10 hours to back up onto
a 3-disk (SATA-II) ZFS pool on a small NAS box with a 4-core 64-bit Atom CPU.
It gets done in the background, so I don't notice.

> I'm not crazy about using Git for backups, but I suppose I could.
> BackupPC sounds great, but I realize that it is an enterprise solution
> that expects to run on its own separate server, probably have a local
> disk for backups, and so on. I suppose I could run Apache on my
> workstation and run BackupPC on top. I hope to get access to a file
> server next week. I don't know if it will be an NFS mount or an SSH
> login. I suppose that an NFS mount would work best for BackupPC.

I've done BackupPC over NFS before, but it will be slower.  Turn on async
and turn off atimes (turn off atimes regardless for your BackupPC partition -
you don't need atimes on /var/lib/backuppc whether or not you need them
anywhere else).
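
Concretely, something like the sketch below (server name, export path and
option lists are illustrative; async belongs on the server-side export):

# client-side /etc/fstab entry for the pool, with atime updates off
fileserver:/export/backuppc  /var/lib/backuppc  nfs  rw,noatime,hard  0  0

# server-side /etc/exports line (async trades safety for speed)
/export/backuppc  backuppc-host(rw,async,no_root_squash)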


[1] e.g.
/home/tconnors/thesis/data/tconnors/nfs_cluster_cosmic_tconnors/magellanic/papergrid/lnlam1/2.5-0.0Gyr/lmcmass2.0e10Mo/smcmass3.0e9Mo-3.0e9Mo/rad7.0d,7.0hkpc/theta45/phi210/rot0/1.5d:1.5h/0.6vd/0.2t/2.0l/part20d,20h/high-timeres/galout/output/data/pview_columns/g02.processed.followsmc.columns-xproj:yproj:zproj:xscreen:yscreen:vlsr:vgsr:vsub:rp:th_lmc:ph_lmc:x_smc:y_smc:z_smc:x_mw:y_mw:z_mw:x_sun:y_sun:z_sun.pview.dat


-- 
Tim Connors



Re: [BackupPC-users] BackupPC as a personal backup solution?

2013-06-25 Thread Tim Connors
On Mon, 24 Jun 2013, Tyler J. Wagner wrote:

> On 2013-06-23 14:13, Daniel Carrera wrote:
> > I'd like to ask an opinion question: Do you think BackupPC is a
> > sensible backup solution for personal, non-enterprise,
> > only-one-computer use?
>
> If you plan to run it on your local work station, on your disk, for the
> purposes of restoring overwritten/deleted files, then consider using a COW
> filesystem like ZFS instead.

I can't recommend ZFS under BackupPC.  My own BackupPC/ZFS fileserver
crashes about once a week (to the point where I built an automatic watchdog
for it), and every single lockup reported on the zfsonlinux mailing list is
due to a relatively large rsync (though not a huge one - 300G seems to be the
typical size).  It just doesn't cope with memory pressure yet, and an rsync
run seems to interact badly with the cache.

> BackupPC really is best with its own server. And with the amount of data
> you're quoting, make sure you're using gigabit Ethernet at minimum.

Nah, I was running on 100M for a long time quite successfully.  The
bottleneck is disk on the server anyway.

-- 
Tim Connors



Re: [BackupPC-users] Another BackupPC Fuse filesystem

2013-06-04 Thread Tim Connors
ze: 120
unique: 10, opcode: LOOKUP (1), nodeid: 2, insize: 44, pid: 4959
LOOKUP /dirachome/892
getattr /dirachome/892
attr -> ARRAY(0x2ac40f0), 
0,1,16877,3,130,130,0,1024,1333206059,1333206059,1333206059,512,2
Invalid data type passed at /usr/lib/perl5/Fuse.pm line 130.
 at /home/tconnors/bin/backuppcfs.pl line 58
main::__ANON__('Invalid data type passed at /usr/lib/perl5/Fuse.pm line 
130.\x{a}') called at /usr/lib/perl5/Fuse.pm line 130
Fuse::main('mountopts', 'ro,default_permissions,allow_other', 
'mountpoint', '/snapshots', 'threaded', 0, 'debug', 1, 'open', ...) called at 
/home/tconnors/bin/backuppcfs.pl line 726



And then I go beyond my depth :(


-- 
Tim Connors



Re: [BackupPC-users] Put pool on an nfs mounted Solaris zfs share

2011-11-20 Thread Tim Connors
On Thu, 17 Nov 2011, Harry Putnam wrote:

> On debian many of the things that would be done by user during an
> install from sources are done for you.  I ended up with the main files
> at /var/lib/backuppc.  which contains a whole pile of some kind of data
> files.  I see them in places like cpool/0/0/0.
>
>   pwd
>  /var/lib/backuppc
>
>   ls
>  cpool  log  pc  pool  trash
>
>   ls cpool/0/0/0
>  00082b8bf118ab8238eab15debddfdd7  000f017d12997dfc67d8e55eab8

Debian's default conf file backs up only /etc on localhost as a
demonstration, with the idea that you're meant to change it.  But it works
like that out of the box as soon as you apt-get install backuppc, even if you
haven't configured anything at all yet.

Perhaps check
$Conf{RsyncShareName} in /etc/backuppc/config.pl

Of course, it's best to do this via the web interface, so that it picks up
the right version of that variable for whichever host you're looking at.

> However, on zfs, it is done transparently and is not really a big
> resource user.
>
> Any thoughts on this subject would be very welcome.

Yeah, if I had a ZFS filesystem (or btrfs), I would do much the same (having
not tried it yet, I don't know that I'd *succeed*).  BackupPC is really quite
slow (3MB/s on average on my machines) at backing up a machine or
reconstructing a given path from a tall tree of incrementals.  Not needing to
do incrementals at all (see the patches on this list for rsync usage) might
be a big win.  I'm sure ZFS is a little quicker than that, given that it's
not done in perl.
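
If anyone wants to experiment, the snapshot side of it is only a couple of
commands (the dataset name and date are illustrative):

# take a dated snapshot of the dataset holding the backups
zfs snapshot tank/backups@2011-11-20

# old versions are then browsable read-only, with no incremental
# reconstruction needed
ls /tank/backups/.zfs/snapshot/2011-11-20/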

-- 
Tim Connors



Re: [BackupPC-users] Fairly large backuppc pool (4TB) moved with backuppc_tarpccopy

2011-09-30 Thread Tim Connors
On Fri, 30 Sep 2011, Mike Dresser wrote:

> On 29/09/11 10:28 PM, Adam Goryachev wrote:
> > Can I assume this is because the new HDD's perform better than the old?
> > In other words, would it be safe to assume you would get even better
> > performance using RAID10 with the new HDD's than you are getting with RAID6?
>
> Yes, the new drives are several generations newer.  If i went with
> Raid10 again, I would need 8 drives instead of 6.

With RAID 10, as you say, you need more drives than with RAID 6.  That
increases the likelihood of having multiple drive failures.

Slightly different RPMs from the different generations of disks in your
RAID 10, as someone else suggested, might introduce resonances in your
enclosure that cause more vibration than identical drives would.


It's all too tricky.  I'm going back to clay tablets.

-- 
Tim Connors



Re: [BackupPC-users] Fairly large backuppc pool (4TB) moved with backuppc_tarpccopy

2011-09-29 Thread Tim Connors
On Fri, 30 Sep 2011, Adam Goryachev wrote:

> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> On 30/09/11 04:11, Mike Dresser wrote:
> > Just finishing up moving one of my backuppc servers to new larger disks,
> > and figured I'd submit a success story with backuppc_tarpccopy... I
> > wanted to create a new xfs filesystem rather than my usual dd and
> > xfsgrowfs, as this thing has been in use since backuppc 2.1 or similar.
> > Old disks were 10 x 1TB in raid10, new is 6 x 3TB's in raid6 (which in
> > itself has been upgraded many times).. the new raid6 is FAR faster in
> > both iops and STR than the old one.
>
> Hi,
>
> I'm curious that you say you are getting better performance from the new
> RAID6 compared to the old RAID10.
>
> Can I assume this is because the new HDD's perform better than the old?
> In other words, would it be safe to assume you would get even better
> performance using RAID10 with the new HDD's than you are getting with RAID6?
>
> I'm just curious, I'm fairly confident that RAID10 always performs
> better than RAID5 or RAID6. Just want to make sure I'm not making a mess
> of things by using RAID10 over RAID6.
>
> Can anyone suggest any advantage of RAID6 over RAID10 (aside from the
> obvious additional storage capacity/reduced wastage of space)?

Worst case with RAID 6: you lose one disk, start the rebuild, and during the
rebuild suffer the fairly likely consequence of losing another disk - and you
still have a valid array.

Worst case with RAID 10: that same fairly likely second-disk failure lands on
the surviving half of a mirror, and you lose all your data.

Care about your data ==> don't use RAID 10.

-- 
Tim Connors



Re: [BackupPC-users] Search for File

2011-09-28 Thread Tim Connors
On Wed, 28 Sep 2011, Timothy J Massey wrote:

> Arnold Krille  wrote on 09/28/2011 11:20:57 AM:
>
> > > I'm sure someone with more shell-fu will give you a much better
> command
> > > line (and I look forward to learning something!).
> >
> > Here you are:
> >
> > find  -iname 
...
> > Using find you will realize that its rather slow and has your disk
> rattling
> > away. Better to use the indexing services, for example locate:
> >
> > locate 
>
> Yeah, that's great if you update the locate database (as you mention).  On
> a backup server, with millions of files and lots of work to do pretty much
> around the clock?  That's one of the first things I disable!  So no
> locate.

Hmmm.

When I want to search for a file (half the time I don't even know what
machine or from what time period, so I have to search the entire pool), I
look at the mounted backuppcfs fuse filesystem (I mount onto /snapshots):
https://svn.ulyssis.org/repos/sipa/backuppc-fuse/backuppcfs.pl

What if you let mlocate index /snapshots?

I haven't tested getting it to index /snapshots, but mlocate doesn't descend
into directories whose mtime hasn't changed.  If backuppcfs correctly
preserves mtimes for directories, then updatedb.mlocate will do the right
thing and be a lot quicker than regular old updatedb.  Then make sure that
cron runs it at a time appropriate for you (when I was doing night shift,
this *wasn't* at 4am!), and you won't even notice that it's busy.

Then wrap locate up in a simple cgi script to present to your users
instead of training them how to use locate on the commandline.
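
Something like the sketch below keeps it out of the system database (the
paths are illustrative; check the mlocate man pages for the exact flags on
your distro):

# build a dedicated locate database over the fuse mount
updatedb --database-root /snapshots --output /var/lib/mlocate/snapshots.db

# and query it
locate -d /var/lib/mlocate/snapshots.db places.sqlite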

-- 
Tim Connors



Re: [BackupPC-users] Restore via rsync gets stuck

2010-04-22 Thread Tim Connors
On Thu, 22 Apr 2010, Les Mikesell wrote:

> On 4/22/2010 9:37 AM, Tim Connors wrote:
> >
> > I've seen this problem too, and it's not filesystem issues.
> >
> > It's either rsync or the perl library (and I very strongly suspect the
> > latter) that is deadlocking.  Trace it with strace - it's always the same
> > place (er, sorry, can't remember offhand!).  I'm restoring with rsync
> > version 3.0.6 and 3.0.7.
>
> What OS/version and are the perl modules all from the base distribution
> or do you have CPAN or 3rd party packages installed?  I haven't seen
> this problem in backuppc myself, but I have run across some problems
> with Compress::Zlib in a different program that some postings indicated
> were caused by a wrong version of Scalar::Util on the sytem.

Debian unstable on one machine, and debian stable on another (both have
locked up upon restoring).  The packages are all those from debian itself.

Scalar::Util itself is in perl-base, which in one case is 5.10.1-11.

And Compress::Zlib is in libcompress-zlib-perl, version 2.024-1.

-- 
TimC
The path to enlightenment_0.16.5-6 is through apt-get



Re: [BackupPC-users] Restore via rsync gets stuck

2010-04-22 Thread Tim Connors
On Wed, 21 Apr 2010, Les Mikesell wrote:

> On 4/21/2010 7:47 AM, Les Mikesell wrote:
> > Philippe Bruhat (BooK) wrote:
> >> On Wed, Apr 21, 2010 at 08:45:29AM +0200, Philippe Bruhat (BooK) wrote:
> >>> On Wed, Apr 21, 2010 at 08:35:03AM +0200, Philippe Bruhat (BooK) wrote:
>  I understand this as rsync currently restoring the file above. Using ls 
>  -l
>  to see the progress of the restore, I get this:
> 
>  -rw-r--r-- 1 esouche esouche 107368448 2010-04-05 22:47 places.sqlite
>  -rw--- 1 rootroot107216896 2010-04-21 00:00 
>  .places.sqlite.UL48SN
> >>> I tried restoring only this file, and watched the progress.
> >>> The restore got stuck at the same point:
> >>> -rw-r--r-- 1 esouche esouche 107368448 2010-04-05 22:47 places.sqlite
> >>> -rw--- 1 rootroot107216896 2010-04-21 08:43 
> >>> .places.sqlite.uVfJCE
> >>>
> >>
> >> So I tried to restore that specific file using a zip archive, only to
> >> discover that the old file on the target and the file being restored
> >> by backuppc had the same md5 and sha1 sums.
> >>
> >> Is that a known bug in rsync?
> >
> > No - or at least I haven't heard of it. Could you have filesystem 
> > corruption on
> > either side?
>
>
> One other thought here: was anything else running that could have been
> writing to or locking the file during the restore?

I've seen this problem too, and it's not a filesystem issue.

It's either rsync or the perl library (and I very strongly suspect the
latter) that is deadlocking.  Trace it with strace - it always stops in the
same place (er, sorry, I can't remember offhand where!).  I'm restoring with
rsync versions 3.0.6 and 3.0.7.

I've found that rsync restore to a filesystem that is already populated is
so unreliable I just avoid it now.  I restore to an empty path, then
rsync from that location back to the final destination.
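
That two-step workaround is roughly the following (paths illustrative):

# 1. restore from BackupPC (via the web interface) into an empty staging
#    directory
mkdir -p /srv/restore-staging

# 2. once the restore completes, rsync the staged tree onto the live,
#    already-populated destination
rsync -aH --numeric-ids /srv/restore-staging/ /home/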

I *once* had a backup lock up in the same fashion (only to get interrupted
by SIGALRM about 26 hours later, from memory).  Repeatedly, with incremental
backups, it always locked up on the same file (this was an incremental of
about level 3 or so, with 1 and 2 obviously having succeeded).  I think I had
it configured with full backups every 60 days, incrementals every day with
levels 1,2,3,4,5,2,3,4,5,3,4,5,4,5 repeating.

If I removed the problematic partition from the scheme, then re-added it a
few days later when it was back to a lower level (and hence doing an
incremental against a previous level that *did* work), it *still* locked
up.

If I deleted back to previous levels, then I still had it lock up on the
same file.  I can't remember whether I had to delete all the way back to a
full, or whether I had to go back several incrementals.  But I had to go
back further than I was expecting given the layout (sorry, too long ago to
remember the details) and what had previously succeeded.  I think I even
tried forcing a full, which still locked up at the same place.  I can't
remember whether I deleted the file from the incremental backup to force
it to pull it in from scratch.

Once I got it all the way through a successful backup, I've never had a
problem with that host again.  The rsync version on that was 2.6.8 on
solaris.  It's a strange enough machine with weird ssh behaviour that I
wasn't overly concerned about that backup locking up, particularly as it
hasn't happened since.

But the restores failing in this way is scary.  And it makes me nervous to
replace our existing backup infrastructure with backuppc if it can't
reliably do a restore!

-- 
TimC
It is amazing how much "mature wisdom" resembles being too tired.
 --Robert A. Heinlein



Re: [BackupPC-users] trashClean && backup scheduling

2009-12-22 Thread Tim Connors
On Tue, 22 Dec 2009, Sebastiaan van Erk wrote:

> Question 1:
>
> The first thing I noticed is that trashClean runs every 5 minutes, making it
> unable to spindown my disks. I changed the configuration setting to once every
> 7 days, but I was wondering if this is likely to cause any problems...

I do similar things.  No problems.

> Question 2:
>
> The next thing I'm wondering: what is the best way to make sure backups occur
> as close together as possible, so that the spinup time is minimalized and I
> don't get too many spinup/spindown cycles? I can use blackout periods of
> course, but are there any other measures I can take?

Our backup server has 4 raided mybooks which spin down automatically and
take a lot of time to spin back up.  I have a max of 2 jobs scheduled at a
time, and schedule backuppc to wake up every 6 minutes (spindown time is
about 10 minutes).

$Conf{WakeupSchedule} = [
'8', '8.1', '8.2', '8.3', '8.4', '8.5', '8.6', '8.7', '8.8', '8.9', '9',
'9.1', '9.2', '9.3', '9.4', '9.5', '9.6', '9.7', '9.8', '9.9', '10',
'10.1', '10.2', '10.3', '10.4', '10.5', '10.6', '10.7', '10.8', '10.9',
'11'
];

(only for a limited time because jobs at the observatory have to be
finished before about 5pm)

-- 
TimC
Never trust a man who can count to 1,023 on his fingers. --unknown



Re: [BackupPC-users] Unexpected call BackupPC::Xfer::RsyncFileIO->unlink(...)

2009-12-19 Thread Tim Connors
On Sat, 19 Dec 2009, Holger Parplies wrote:

> Hi,
>
> Jeffrey J. Kosowsky wrote on 2009-12-18 15:36:48 -0500 [Re: [BackupPC-users] 
> Unexpected call?BackupPC::Xfer::RsyncFileIO->unlink(...)]:
> > Jeffrey J. Kosowsky wrote at about 13:11:37 -0500 on Monday, November 2, 
> > 2009:
> >  > Unexpected call 
> > BackupPC::Xfer::RsyncFileIO->unlink(cygwin/usr/share/man/man3/addnwstr.3x.gz)
> >  >
> >  > [...]
> >  >
> >  > Note from the below quoted thread from 2005, Craig claims that the error 
> > is
> >  > benign, but doesn't explain how/why.
> >
> > [...]
> >
> > I am curious about what could be causing this situation [...]
>
> if you're curious about what is causing a benign warning message, you're
> probably on your own for the most part. I can supply you with one casual
> observation and one tip:
>
> I think I saw that warning when I changed from tar to rsync XferMethod. As you
> know, tar and rsync encode the file type "plain file" differently in attrib
> files (rsync has a bit for it, tar (like stat()) doesn't and simply takes the
> absence of a bit for a special file type to mean "plain file"). When rsync
> compares remote and local file type (remote from remote rsync instance, local
> from attrib file generated by tar XferMethod), it assumes a plain file changed
> its type, so it removes the local copy [...]

I get it whenever any file gets replaced by a symlink courtesy of some
debian update (or vice versa - when a symlink gets replaced by a real
file?).

-- 
TimC
Beware of bugs in the above program. I proved it correct,
I did not try it.   --- D. E. Knuth



Re: [BackupPC-users] Matching files against the pool remotely.

2009-12-18 Thread Tim Connors
On Fri, 18 Dec 2009, Malik Recoing. wrote:

> The Holy Doc says ( Barratt:Design:operation:2 ): "it checks each file in
> the backup to see if it is identical to an existing file from any previous
> backup of any PC. It does this without needing to write the file to disk."
>
> But it doesn't say "without the need to upload the file into memory".
>
> I know a file will be skipped if it is present in the previous backup, but
> what happens if the file has been backed up for another host?

It is required to be uploaded first, as otherwise there's nothing to compare
it to (yeah, I know, that's a pain[1]).

It might theoretically be sufficient to let the remote side calculate a hash
and compare it against the pool files with matching hashes, and then let
rsync do full compares against all the matching hashes in the pool (since
hash collisions happen), but I don't believe anyone has tried to code this up
yet, and it would only be of limited use on systems that are
network-bandwidth constrained rather than disk-bandwidth constrained.

[1] I just worked around this myself by copying a large set of files onto
sneakernet (my USB key), copying them into a directory on the local backup
server, backing that directory up, then moving the corresponding directory in
the backup tree into the previous backup of the remote system, so it will be
picked up and compared against the same files when that remote system is next
backed up.  I find out tomorrow whether that actually worked :)


-- 
TimC
Computer screens simply ooze buckets of yang.
To balance this, place some women around the corners of the room.
-- Kaz Cooke, Dumb Feng Shui



[BackupPC-users] RsyncP: BackupPC_dump consumes 100% CPU and progress stalls

2009-12-13 Thread Tim Connors
A few days ago, backuppc spontaneously started getting stuck on a partition
of a machine that it had been dealing with fine for 120 days.

The logfile gives:

2009-12-02 08:46:04 incr backup 122 complete, 38 files, 46425552 bytes, 0 
xferErrs (0 bad files, 0 bad shares, 0 other)
2009-12-03 08:24:08 incr backup started back to 2009-11-29 08:18:01 (backup 
#119) for directory /
2009-12-03 08:27:03 incr backup started back to 2009-11-29 08:18:01 (backup 
#119) for directory /var
2009-12-03 08:28:12 incr backup started back to 2009-11-29 08:18:01 (backup 
#119) for directory /ssb_local
2009-12-03 08:31:26 incr backup started back to 2009-11-29 08:18:01 (backup 
#119) for directory /opt
2009-12-03 08:31:47 incr backup started back to 2009-11-29 08:18:01 (backup 
#119) for directory /scratch
2009-12-03 08:31:57 incr backup started back to 2009-11-29 08:18:01 (backup 
#119) for directory /6dfdata
2009-12-03 08:32:08 incr backup started back to 2009-11-29 08:18:01 (backup 
#119) for directory /6dfgs
2009-12-03 08:33:30 incr backup started back to 2009-11-29 08:18:01 (backup 
#119) for directory /6dfdata2
2009-12-03 08:44:02 incr backup started back to 2009-11-29 08:18:01 (backup 
#119) for directory /usr/local
2009-12-03 08:44:58 incr backup 123 complete, 46 files, 129608707 bytes, 0 
xferErrs (0 bad files, 0 bad shares, 0 other)
2009-12-04 08:18:49 incr backup started back to 2009-12-03 08:24:08 (backup 
#123) for directory /
2009-12-04 08:23:45 incr backup started back to 2009-12-03 08:24:08 (backup 
#123) for directory /var
2009-12-04 08:25:01 incr backup started back to 2009-12-03 08:24:08 (backup 
#123) for directory /ssb_local
2009-12-04 08:28:09 incr backup started back to 2009-12-03 08:24:08 (backup 
#123) for directory /opt
2009-12-04 08:28:32 incr backup started back to 2009-12-03 08:24:08 (backup 
#123) for directory /scratch
2009-12-04 08:28:41 incr backup started back to 2009-12-03 08:24:08 (backup 
#123) for directory /6dfdata
2009-12-04 08:28:50 incr backup started back to 2009-12-03 08:24:08 (backup 
#123) for directory /6dfgs
2009-12-04 08:29:21 incr backup started back to 2009-12-03 08:24:08 (backup 
#123) for directory /6dfdata2
2009-12-04 08:40:29 incr backup started back to 2009-12-03 08:24:08 (backup 
#123) for directory /usr/local
2009-12-04 08:41:25 incr backup 124 complete, 37 files, 47108022 bytes, 0 
xferErrs (0 bad files, 0 bad shares, 0 other)
2009-12-05 08:18:02 incr backup started back to 2009-12-04 08:18:49 (backup 
#124) for directory /
2009-12-05 08:21:32 incr backup started back to 2009-12-04 08:18:49 (backup 
#124) for directory /var
2009-12-05 08:23:00 incr backup started back to 2009-12-04 08:18:49 (backup 
#124) for directory /ssb_local
2009-12-05 08:26:45 incr backup started back to 2009-12-04 08:18:49 (backup 
#124) for directory /opt
2009-12-06 04:27:03 Aborting backup up after signal ALRM
2009-12-06 04:27:05 Got fatal error during xfer (aborted by signal=ALRM)
2009-12-06 10:03:04 incr backup started back to 2009-12-04 08:18:49 (backup 
#124) for directory /
2009-12-06 10:06:13 incr backup started back to 2009-12-04 08:18:49 (backup 
#124) for directory /var
2009-12-06 10:07:57 incr backup started back to 2009-12-04 08:18:49 (backup 
#124) for directory /ssb_local
2009-12-06 10:11:40 incr backup started back to 2009-12-04 08:18:49 (backup 
#124) for directory /opt
2009-12-07 06:12:02 Aborting backup up after signal ALRM
2009-12-07 06:12:04 Got fatal error during xfer (aborted by signal=ALRM)
2009-12-07 08:21:50 incr backup started back to 2009-12-04 08:18:49 (backup 
#124) for directory /

It's a slowaris machine running rsync 2.6.8, and the backuppc server is
debian lenny, running backuppc 3.1.0.

A typical gdb trace on the runaway backuppc_dump process looks like this:

sudo gdb /usr/share/backuppc/bin/BackupPC_dump 20538
bt
#0  0x7f779bdd24c0 in perl_gthr_key_...@plt () from 
/usr/lib/perl5/auto/File/RsyncP/FileList/FileList.so
#1  0x7f779bdd67f2 in XS_File__RsyncP__FileList_get () from 
/usr/lib/perl5/auto/File/RsyncP/FileList/FileList.so
#2  0x7f779e526eb0 in Perl_pp_entersub () from /usr/lib/libperl.so.5.10
#3  0x7f779e525392 in Perl_runops_standard () from /usr/lib/libperl.so.5.10
#4  0x7f779e5205df in perl_run () from /usr/lib/libperl.so.5.10
#5  0x00400d0c in main ()

(always in RsyncP, almost obviously).

It has no files open according to /proc//fd, other than logfiles etc.

It's cpu bound, making no syscalls, so attaching an strace process to it
yields nothing (and I wasn't enlightened when I tried to attach it from
the start).

If I remove the /opt partition one day, then add it the next day, it locks
up again that next day while trying to do /opt.

If I delete 123, 124 (and the newer copies without /opt) and modify the
backups file to make 122 the last one, and thus force it to compare
against backup 119 rather than 123, it still locks up on /opt
(incrlevels is 1, 2, 3, 4, 5, 2, 3, 4, 5, 3, 4, 5, 4, 5)

-- 
TimC
"How should I

[BackupPC-users] patch to make more robust against corrupted filesystem

2009-07-08 Thread Tim Connors

I know there have been threads on here debating the relative merit of
throwing out all of the data on a backup disk that suffered major
filesystem damage, vs trying to fix it and let backuppc regenerate any
files that might have corrupted data (via setting a
RsyncCsumCacheVerifyProb of 1 and redoing all machines as full backups).

In light of just how much data I would be throwing out, including several
machines that have been archived and then decommissioned, I opted to try to
repair the filesystem and let backuppc keep writing to it.  And it seems to
have worked and is still working for me several months later, so I'm
confident all is well in the world.

The only trouble I had was that several attrib files were corrupted, so
unpack() barfed with a die() (I can't remember what it died of, and my logs
have long since been rotated away, but I suspect unpack() dies on invalid
memory references rather than segfaulting, which is awfully kind of it) and
didn't let me back up anything (usually dying about 95% of the way through a
32-hour backup, dagnamit).  Below is my solution to make it more robust.
Once every machine had done a full backup, the eval no longer failed because
there was always a valid attrib file.

This or something like it is probably worth including upstream, because
cosmic rays can always happen, so it's good not to let unpack() kill the
entire backup when the pool contains some slightly corrupted data.

--- Attrib.pm.debian-3.1.0-6    2009-07-09 01:15:43.0 +1000
+++ Attrib.pm.attrib_fix        2009-07-09 01:14:59.0 +1000
@@ -252,11 +252,18 @@
         $fd->read(\$newData, 65536);
         $data .= $newData;
     }
-    (
-        @{$a->{files}{$fileName}}{@FldsUnixW},
-        @{$a->{files}{$fileName}}{@FldsUnixN},
-        $data
-    ) = unpack("w$nFldsW N$nFldsN a*", $data);
+    eval {
+        (
+            @{$a->{files}{$fileName}}{@FldsUnixW},
+            @{$a->{files}{$fileName}}{@FldsUnixN},
+            $data
+        ) = unpack("w$nFldsW N$nFldsN a*", $data);
+    };
+    if ($@) {
+        $a->{_errStr} = "unpack: Can't read attributes for $fileName from $file $@";
+        $fd->close;
+        return;
+    }
     if ( $a->{files}{$fileName}{$FldsUnixN[-1]} eq "" ) {
         $a->{_errStr} = "Can't read attributes for $fileName"
                       . " from $file";


-- 
TimC
"Any sufficiently complicated C or Fortran program contains an ad hoc
informally-specified bug-ridden slow implementation of half of Common
Lisp."   -- Greenspun's Tenth Rule of Programming



Re: [BackupPC-users] "unexpected repeated share name"

2009-05-04 Thread Tim Connors
On Mon, 4 May 2009, Mark Maciolek wrote:

> hi,
>
> You should leave the RysncShareName as / and add the directories under
> BackupFilesOnly
>
>
> $Conf{BackupFilesOnly} = {
>   '*' => [
> '/home',
> '/etc'
>   ]
> };

No, because I want the whole of the / partition backed up (separately from
the home, boot, dos and "volatile" partitions).  rsync has been supplied with
--one-file-system.

I just realised, of course, that I was mistaken about the "/" directory
already appearing in the pool being what erroneously triggered the test (I
forgot that "/" would be munged to "f%2f", which wouldn't collide with
anything already there), so I don't actually know why it is failing.

The /pcbackups/pc/dirac/new directory (so far, halfway through the backup) 
contains:
-rw-r-  1 backuppc backuppc   45 May  4 21:15 attrib
drwxr-x---  4 backuppc backuppc 4096 May  4 21:15 f%2fboot
drwxr-x--- 12 backuppc backuppc 4096 May  4 21:41 f%2fdos
drwxr-x---  5 backuppc backuppc 4096 May  4 21:14 f%2fhome
whereas, if "/" is listed first, you also end up with f%2f, as I did in the
previous successful backup:
-rw-r-  2 backuppc backuppc  112 Apr 30 20:14 attrib
-rw-r-  1 backuppc backuppc  609 Apr 30 20:24 backupInfo
drwxr-x--- 33 backuppc backuppc 4096 Apr 30 03:47 f%2f
drwxr-x---  4 backuppc backuppc 4096 Apr 30 19:23 f%2fboot
drwxr-x--- 12 backuppc backuppc 4096 Apr 30 19:56 f%2fdos
drwxr-x---  5 backuppc backuppc 4096 Apr 30 19:22 f%2fhome
drwxr-x--- 11 backuppc backuppc 4096 Apr 30 20:14 
f%2fvolatile%2fvar%2fcache%2fapt-cacher-ng


The logfile shows this:
2009-05-03 19:03:00 full backup started for directory /home (baseline backup 
#89)
2009-05-04 21:14:47 unexpected repeated share name / skipped
2009-05-04 21:14:47 full backup started for directory /boot (baseline backup 
#89)
2009-05-04 21:15:10 full backup started for directory /dos (baseline backup #89)


-- 
Tim Connors




[BackupPC-users] "unexpected repeated share name"

2009-05-04 Thread Tim Connors
BackupPC fails to back up '/' if it is not specified before anything else.
In fact, no subtree can be specified before its parent, or the parent won't
be backed up, because the directory already exists by the time BackupPC
performs the test for a repeated share name.  If it is a wise idea to keep
the test, then perhaps its message could be reworded to "share already exists
- try specifying subtrees after their parents" or the like.

My failing config is here:
$Conf{RsyncShareName} = [
  '/home',
  '/',
  '/boot',
  '/dos',
  '/volatile/var/cache/apt-cacher-ng'
];


-- 
TimC
We are no longer the knights who say "ni"
We are the knights who say "icky icky (Comet) Ikeya-Zhang zoooboing!"
 --Lord Ender on /.



Re: [BackupPC-users] missing rdev on device...

2009-01-08 Thread Tim Connors
On Wed, 24 Dec 2008, Holger Parplies wrote:

> Hi,
>
> Glassfox wrote on 22.12.2008 at 16:50:24 [[BackupPC-users]  missing rdev on 
> device...]:
> > I tried a complete restore on my localhost (which also runs BackupPC) today 
> > and got a lot of error messages like this:
> >
> > "2008-12-22 21:54:23 localhost: File::RsyncP::FileList::encode: missing 
> > rdev on device file dev/initctl"
> >
> > Any idea what's wrong here?
>
> this is weird.

Bugger, I'm in need of restoring my filesystem and I just came across this.
It issued a few of these messages, then seems to have paused, with rsync and
backuppc not showing any activity according to strace.  I don't know whether
the incorrect handling of the pipe caused the subsequent pause.  I'm
restoring to blank partitions, so it's not as if it's blocking on writing to
a pipe on the remote filesystem, since the pipe doesn't exist yet.  Note that
in my debugging I have blown away the temporary partition I created, so I
can't verify whether it had actually created the pipe yet (one suspects not,
if Rsync.pm couldn't handle it).

> 1.) /dev/initctl is a named pipe. There is no point in *restoring* a named
> pipe, because you can't "restore" the process listening "on the other
> end". You might still want to back up named pipes, for the sake of
> having an exact image of your file system for reference or auditing
> purposes.

Sure you want to restore named pipes (I think you're thinking of sockets).
Named pipes are pipes on the filesystem that have a static name.  They can be
created temporarily by processes to communicate with another process (though
what's the point - they could just have passed file descriptors over a
socket), or they can be pipes that you keep around, persistent across process
restarts (I first used pipes about 10 years ago for a music daemon that could
block on a pipe until some reader came along).  The first pipe backuppc came
across when trying to restore my current system was one I created a few
months ago to work around the limitations of a script that wouldn't write to
stdout or /dev/stdout, but would write to a named pipe.

/dev/initctl, for instance, always exists in /dev.  If you want to restore
/dev, then you have to be able to handle named pipes.

Also, rsync is able to back up and recreate named pipes.  That is probably
what -D covers, and in order to restore a filesystem you need to use -D
anyway.

> Did all of the error messages refer to named pipes?

2009-01-09 12:47:11 gamow: File::RsyncP::FileList::encode: missing rdev on 
device file tconnors/.config/gxine/socket
2009-01-09 12:47:11 gamow: File::RsyncP::FileList::encode: missing rdev on 
device file tconnors/.signature
2009-01-09 12:47:11 gamow: File::RsyncP::FileList::encode: missing rdev on 
device file 
tconnors/movies/a2k/handlebar_mounted_SSO_blackburnhill_return/intermediate1.mpg
2009-01-09 12:47:11 gamow: File::RsyncP::FileList::encode: missing rdev on 
device file 
tconnors/movies/a2k/handlebar_mounted_SSO_blackburnhill_return/intermediate2.mpg

They are named pipes on the system that was backed up and is trying to be
restored to.

> 2.) As far as I understand the complaining code (FileList/FileList.xs in the
> source of File::RsyncP), it is interpreting the file as a device, not
> a pipe. Might this be related to the "-D" option syntax change in rsync
> 2.6.7? Has anyone successfully tested backing up and restoring named
> pipes?

Backing up has never complained.  The file exists as an empty file in the
full backup.
Dunno how to parse the attrib file.

In FileList/rsync.h:

#define IS_DEVICE(mode) (S_ISCHR(mode) || S_ISBLK(mode) || S_ISSOCK(mode) || S_ISFIFO(mode))

> What rsync commands are run for backup and for restore? You can find them in
> the XferLOG and RestoreLOG files ...

I'm restoring from an incremental backup:
backup:
incr backup started back to 2008-12-25 15:00:01 (backup #14) for
directory /home
Running: /usr/bin/ssh -q -x -l backuppc gamow sudo /usr/bin/rsync
--server --sender --numeric-ids --perms --owner --group -D --links
--hard-links --times --block-size=2048 --recursive
--one-file-system --checksum-seed=32761 . /home/
restore:
Running: /usr/bin/ssh -q -x -l backuppc gamow sudo /usr/bin/rsync
--server --numeric-ids --perms --owner --group -D --links
--hard-links --times --block-size=2048 --relative
--ignore-times --recursive --checksum-seed=32761 . /mnt/


-- 
TimC
MacOSX: Sort of like a pedigree persian cat. Very sleek, very
sexy, but a little too prone to going cross-eyed, biting you on
your thumb and then throwing up on your trousers. -- Jim in ASR


[BackupPC-users] Changes between two versions

2008-12-23 Thread Tim Connors
Hi,

There is a way of finding annotated history of one directory:

http:///backuppc/index.cgi?action=dirHistory&host=&share=/&dir=/

But I was wondering whether it is currently possible, or whether it would be
easy to add support, to show the changes to an entire hierarchy (starting
from some nominated path) between two nominated backups (not just the last
and the second-last) and/or the current filesystem (e.g. the last backup
versus the current filesystem).  That would naturally be a very slow list to
produce, as it would have to log in and inspect the current filesystem.  I'd
willingly give up that aspect and just require you to perform another backup
to compare against, as long as I could ensure that the backup I was about to
compare against wasn't about to be expired.

I was thinking of a list of all files that have changed, with a link to
each file that shows the unified diff between them (if an ascii file).

It'd be easy to produce such a list in the first place if you knew you were
looking at 2 full backups -- just do a `find -printf '%i %p' | sort -g` in
each tree and look for inode/filename pairs that don't match between the two
trees (this wouldn't find metadata changes, which would presumably require
parsing the attrib files, and it would also get confused by pools that have
had their hardlinks broken).
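
For two full backups of the same host, that comparison is roughly the
sketch below (host name and backup numbers are illustrative, and it only
works while the per-backup trees are still hardlinked into the pool):

cd /var/lib/backuppc/pc/somehost
find 123 -printf '%i %P\n' | sort -k2 > /tmp/backup-123.lst
find 125 -printf '%i %P\n' | sort -k2 > /tmp/backup-125.lst

# paths whose inode differs, or which appear in only one list, have changed
diff /tmp/backup-123.lst /tmp/backup-125.lst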

-- 
TimC
It is impossible to sharpen a pencil with a blunt ax.  It is equally vain
to try to do it with ten blunt axes instead.   -- Edsger Dijkstra



Re: [BackupPC-users] Backuppc mirroring with rdiff-backup or not?

2008-11-23 Thread Tim Connors

On Tue, 18 Nov 2008, Adam Goryachev wrote:


> Christian Völker wrote:
> > My way to backup the backup ;-) is LVM
> >
> > The pool is on a LVM as a LV. To backup the pool while backuppc is
> > running I can take a snapshot of the pool's LV and I rsync this one. So
> > there are no filesystem issues and backuppc can stay running while the
> > rsync is running.
> > Works like a champ!
>
> Except your filesystem is not in a stable state. You should stop
> backuppc, umount your filesystem, then take your snapshot, and then
> remount the filesystem, start backuppc, and finally copy the snapshot
> remotely as needed.
>
> With your existing method, you should fsck your filesystem after it is
> copied to the remote location.
>
> BTW, 99% of the time it won't matter, but if it screws up that one time
> when you need it, then oops :(


Actually, LVM hooks into most filesystems, including ext3, to make sure they
are self-consistent at the point of snapshot.


See http://www.google.com/[EMAIL PROTECTED]

Note that a 'fsck -p' is needed on the snapshot, but it will only 
encounter orphaned inodes if all has gone well (and if all hasn't gone 
well, then there are bigger problems with the real filesystem, and it 
should be rebooted and force fscked as soon as possible, as per the thread 
above).
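
For what it's worth, the snapshot-and-copy sequence is roughly the sketch
below (VG/LV names, the snapshot size and the destination are illustrative):

# snapshot the LV holding the pool while backuppc keeps running
lvcreate --snapshot --size 10G --name backuppc-snap /dev/vg0/backuppc

# replay the journal / clean up orphaned inodes on the snapshot only
fsck -p /dev/vg0/backuppc-snap

# mount it read-only and copy it off-site, preserving hardlinks
mkdir -p /mnt/backuppc-snap
mount -o ro /dev/vg0/backuppc-snap /mnt/backuppc-snap
rsync -aH /mnt/backuppc-snap/ remote:/srv/backuppc-copy/
umount /mnt/backuppc-snap
lvremove -f /dev/vg0/backuppc-snap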


--
Tim Connors
-
This SF.Net email is sponsored by the Moblin Your Move Developer's challenge
Build the coolest Linux based applications with Moblin SDK & win great prizes
Grand prize is a trip for two to an Open Source event anywhere in the world
http://moblin-contest.org/redirect.php?banner_id=100&url=/___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Backuppc mirroring with rdiff-backup or not?

2008-11-23 Thread Tim Connors
On Mon, 17 Nov 2008, [EMAIL PROTECTED] wrote:

> My biggest worry regarding these outside-of-BackupPC hacks is that when I
> need them, I'm going to find that they're not going to work because it was
> running, say, simultaneous to an actual backup.
>
> Don't get me wrong:  I'll take the hacks.  It's better than nothing.  I,
> like I think *most* of us, would kill (or even pay for!) a method of
> replicating a pool in a guaranteed-correct way, especially at the host or
> even backup level.  But I still worry about using these hacks for
> production.

What if you did an lvm snapshot of the pool while you knew no
backups/nightlies etc. were running, then rsynced/dd'ed the lvm snapshot?


Speaking of which - I'm still, several months later, writing a script that 
backuppc can use to sync to lvm snapshots.  If only I wasn't so busy 
screwing around doing other things.  Has anyone done similar?

-- 
Tim Connors


-
This SF.Net email is sponsored by the Moblin Your Move Developer's challenge
Build the coolest Linux based applications with Moblin SDK & win great prizes
Grand prize is a trip for two to an Open Source event anywhere in the world
http://moblin-contest.org/redirect.php?banner_id=100&url=/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/