Roman Mamedov posted on Sat, 12 Mar 2016 20:48:47 +0500 as excerpted:

> I wonder what's the best way to proceed here. Maybe try btrfs-zero-log?
> But the difference between transid numbers of 6 thousands is concerning.

btrfs-zero-log is a very specific tool designed to fix a very specific 
problem, a corrupted log tree that makes the mount fail during log 
replay, and transid differences >1 are not that problem.
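
For reference, in the one case it is meant for (a mount failing during 
log replay), the invocation is the legacy standalone tool or, in current 
btrfs-progs, the rescue subcommand; the device name here is only a 
placeholder:

  # discards the log tree; only for mounts that fail during log replay
  btrfs rescue zero-log /dev/sdX

That's not the situation here, though, so leave it alone.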

I read your followup, posting btrfs check output and wondering about 
enabling --repair, as well.

As long as you have a backup, --repair shouldn't be a problem, even if 
it does cause further damage (which it doesn't look like it will in your 
case).
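
For anyone following along, the usual sequence (device name is a 
placeholder, and both commands want the filesystem unmounted) is the 
read-only check first, repair only once backups are current:

  # read-only by default; reports problems but changes nothing
  btrfs check /dev/sdX

  # the dangerous one; actually writes fixes to the filesystem
  btrfs check --repair /dev/sdX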

If you don't have a backup, it shouldn't be a problem either, since the 
very fact that you don't have one indicates, by your actions, that you 
consider the data at risk worth less than the time, effort and resources 
necessary to have made that backup in the first place.  As such, even if 
you lose the data, you saved what was obviously more important to you 
than that data, namely the time, effort and resources you would 
otherwise have put into making and testing that backup, so you're still 
coming out ahead. =:^)

Which leaves only one case not clearly covered: data worth backing up, 
which you did back up, but the backup is somewhat stale.  While the risk 
was only theoretical, you didn't consider the chance of losing the data 
updated since that backup worth more than the cost of refreshing it.  
Now that the theoretical chance has become reality, loss of that 
incremental data wouldn't be earth-shattering, but you'd prefer not to 
lose it if you can save it without too much trouble.  That's quite 
understandable, and it's the exact position I've been in myself a couple 
of times.

In both of my cases, where I did end up giving up on repair and 
eventually blowing away the filesystem, btrfs restore (run before that 
blow-away) was able to get me back the incremental changes since my last 
proper backup.  Had it not worked, I'd certainly have lost some work and 
been less than absolutely happy, but since I _did_ have backups (the 
very fact that I had them indicating I valued the data at risk at 
something above trivial), merely somewhat stale ones, it wouldn't have 
been the end of the world.
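
For completeness, btrfs restore works on the unmounted device and copies 
whatever it can still find to a directory on some other filesystem; the 
paths here are placeholders:

  # -v lists files as they're recovered, -i continues past errors
  btrfs restore -v -i /dev/sdX /mnt/recovery

It never writes to the damaged filesystem itself, which is why it's the 
safe thing to try before any --repair attempt.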


Of course in your case you _can_ mount, if only read-only.  So take the 
opportunity you've been handed and update your backups, just in case.  
(And of course a backup that hasn't been verified readable/restorable 
isn't yet a completed backup; a would-be backup can't really be 
considered a backup until that verification is done.)  Then, even in the 
worst case, btrfs check --repair can't do more than inconvenience you a 
bit if it makes the problem worse instead of fixing it, since you'll 
have current backups and will only need to blow away the filesystem, 
recreate it fresh, and restore them.
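
As a minimal sketch of that backup refresh, with mountpoints assumed 
purely for illustration:

  # mount the damaged filesystem read-only
  mount -o ro /dev/sdX /mnt/damaged

  # copy everything off, preserving perms, hardlinks, ACLs and xattrs
  rsync -aHAX /mnt/damaged/ /mnt/backup/

  # verify: dry-run with checksum compare; clean output = complete backup
  rsync -aHAXcn /mnt/damaged/ /mnt/backup/

Only when that verification pass comes back clean does the would-be 
backup count as a backup, per the above.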

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman

