Hi,

John Rouillard wrote on 2011-10-11 20:29:42 +0000 [Re: [BackupPC-users] restore from a read only file system possible?]:
> On Tue, Oct 11, 2011 at 09:14:33PM +0100, Tim Fletcher wrote:
> > On Tue, 2011-10-11 at 19:57 +0000, John Rouillard wrote:
> > > On Tue, Oct 11, 2011 at 01:10:30PM +0200, Frank Wolkwitz wrote:
> > > > After raid controller failure the pool file system is running in read
> > > > only mode.
> > > > Making a file system check would take several days (ext3 fs, 13TB of 16
> > > > TB used) and success is not garanteed.
> > > >
> > > > So the question is: Is it possible to run backuppc in a read only
> > > > environment, not to make backups, but to restore files?
I've never tried it, but I doubt it. For starters, BackupPC won't be able
to create the test hardlink. If you're running an older version which
doesn't try that yet, it will fail to open the log file (and the server
socket?). trashClean probably won't mind as long as trash/ is empty, and
might be a slight nuisance if it is not.

I agree with the point that has been made: just run BackupPC_tarCreate
without attempting to start the daemon. You simply don't need the daemon -
it's there for scheduling backups. If you just want to download files via
the web interface, that could even work in theory (i.e. I don't think the
daemon is involved in retrieving the files), except that I believe the web
interface will refuse to work if the daemon is not running (I'm not sure -
just test it). The command line gives you the most control and the best
diagnostics on errors, so that's what I'd recommend in a case like this.

> > > Well maybe but why would you. If the filesystem is inconsistent, how
> > > do you know that the file you are restoring points to the proper data?

Is the filesystem inconsistent? I agree that you should be suspicious
about the integrity of the data, but you may have no better option than
to try. Also, BackupPC tries to avoid pool writes when possible and only
changes old files to add checksum data (and that only for rsync with
checksum caching enabled), so I'd argue that with a reasonable file
system (i.e. not reiserfs ;-|) you have good enough chances of success to
warrant trying. What was happening when the raid controller failed? Was
it in the middle of a backup, a link, a nightly run?

> > That's where checksums come in

They would if they did :-).

> Well if the pc/system/share/some/random/file is a hard link to the
> wrong file because the filesystem metadata is screwed up how would you
> detect it? The checksums are in the files right, so the data coule be
> correct but it's data for the wrong file.

Hmm, how would a file system check detect and fix this?
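For the archives, a command-line restore along the lines suggested above might look like the sketch below. The install path, host name, share name and backup number are placeholders you would replace with your own; check your version's BackupPC_tarCreate usage output for the exact options:

```shell
# Run as the backuppc user; the bin/ path varies by distribution.
# -h = host, -n = backup number (-1 means the most recent backup),
# -s = share name; the trailing "." selects everything below the share.
sudo -u backuppc /usr/local/BackupPC/bin/BackupPC_tarCreate \
    -h myhost -n -1 -s /home . > /tmp/myhost-home.tar

# Then unpack the tar stream wherever you want the restored files:
mkdir -p /tmp/restored
tar -xf /tmp/myhost-home.tar -C /tmp/restored
```

Writing the tar stream to a different (writable) filesystem, as above, also avoids any writes to the read-only pool.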
If the attrib files contained the full file md5sums (as Jeffrey has
probably suggested), it would be *possible* for BackupPC to check, but
even so, your pc/ trees could be seriously messed up, so you'd get the
correct content restored to a random directory layout. The only promising
solution is ZFS, presuming it performs as well as the specification
sounds.

> Also are the checksums verified during restore?

I'm pretty sure they aren't. The only thing I'd expect to be noticed
would be decompression errors. Don't ask me how error reporting would be
supposed to work, though. I was just considering suggesting (or
implementing) an option to verify checksums on restore (hmm, difficult -
the old story: we don't know the pool file name), but what then? Abort on
error? Skip the file on error? Rename the file on error? That all sounds
wrong for *some* use cases. Besides that, in another thread we're just
wondering about incorrect pool file names. What should we trust more, the
contents or the checksums?

> I though the checksum verification was done during backups.

Yes, and this also doesn't (and can't) fix detected errors - it just
prevents linking to erroneous content, I believe.

> Also we don't know that he is using the rsync backup method which I think is
> the only one that does checksums (and only if checksum caching is enabled)
> right?

Right.

Regards,
Holger
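To make the abort/skip/rename question concrete, here is a minimal sketch of what a verify-on-restore check could look like. This is hypothetical - BackupPC has no such option today, and the expected digest would have to come from somewhere like full-file md5sums stored in the attrib files, which don't currently exist:

```shell
#!/bin/sh
# Hypothetical post-restore check: compare a restored file's md5sum
# against an expected full-file digest and, on mismatch, apply the
# "rename" policy discussed above (set it aside as *.badsum).
verify_restored() {
    file=$1
    expected=$2
    actual=$(md5sum "$file" | awk '{print $1}')
    if [ "$actual" = "$expected" ]; then
        echo "OK  $file"
    else
        # One of several possible policies; abort or skip would
        # be equally defensible depending on the use case.
        mv "$file" "$file.badsum"
        echo "BAD $file (renamed to $file.badsum)"
        return 1
    fi
}
```

Which policy is right clearly depends on the restore scenario, which is exactly why a single built-in behaviour sounds wrong for *some* use cases.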
_______________________________________________
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/