On 2017-10-12 21:42, Kai Hendry wrote:
Thank you Austin & Chris for your replies!
On Fri, 13 Oct 2017, at 01:19 AM, Austin S. Hemmelgarn wrote:
Same here on a pair of 3-year-old NUCs. Based on the traces and the
other information, I'd be willing to bet this is probably the root cause
of the issues.
It probably is... since when I remove my new 4TB USB disk from the
front, I am at least able to mount my two 2TB drives in degraded mode
and see my data!
Given this, I think I know exactly what's wrong (although confirmation
from a developer that what I think is going on can actually happen would
be nice). Based on what you're saying, the metadata on the two 2TB
drives says they're the only ones in the array, while the metadata on
the 4TB drive says all three are in the array, but is missing almost all
other data and is out of sync with the 2TB drives.
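One way to check whether the per-device metadata really is out of sync
is to compare the superblocks directly. A sketch (the device names
/dev/sdb, /dev/sdc and /dev/sdd are placeholders for the two 2TB drives
and the 4TB drive; this needs root and the actual devices present):

```shell
# Dump each device's superblock and compare the generation,
# num_devices and fsid fields; out-of-sync metadata shows up as
# mismatched generation numbers between the devices.
for dev in /dev/sdb /dev/sdc /dev/sdd; do
    echo "== $dev =="
    btrfs inspect-internal dump-super "$dev" | grep -E '^(generation|num_devices|fsid)'
done
```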
So I am not quite sure what to do now.
I don't trust USB hubs.
Yeah, I don't trust USB in general for permanently attached storage.
Not just because of power problems like this, but because all kinds of
things can cause random disconnects, which in turn cause issues with any
filesystem (BTRFS just makes it easier to notice them).
On a different NUC I've noticed I can't charge my iPhone anymore!
https://mail-archive.com/linux-usb@vger.kernel.org/msg95231.html So...
is there any end in sight for the "USB power" problem? Does USB-C /
Thunderbolt address this issue? :(
In theory, yes, but I'm not sure if the NUCs that include it properly
support the USB Power Delivery specification (if not, then they can't
safely source more than 500mA, which is too little for a traditional
hard drive).
I'll try returning my new 4TB to Amazon and finding an externally
powered one.

merely the Btrfs signature is wiped from the deleted device(s). So you
could restore that signature and the device would be valid again;

I wonder how you would do that, in order to have a working snapshot that
I can put in cold storage?
In practice, it's too much work to be practical. It requires rebuilding
the metadata tree from scratch, which is pretty much impossible with the
current tools (and even then you likely wouldn't have all the data,
because some may have been overwritten during the device removal).
Nonetheless I hope the btrfs developers can make it possible to remove a
RAID1 drive, for the cold-storage use case, without any faffing.
From a practical perspective, you're almost certainly better off
creating the cold-storage copy without removing a device from the BTRFS
array at all. I would suggest one of the following approaches instead:
1. Take a snapshot of the filesystem, and use send/receive to transfer
that to another device which you then remove and store somewhere.
2. Use rsync to copy things to another device which you then remove and
store somewhere.
Both these options are safer, less likely to screw up your existing
filesystem, and produce copies that can safely be connected to the
original system at the same time as the original filesystem.
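A minimal sketch of both approaches (the subvolume path /data, the
snapshot name and the mount point /mnt/coldstore are assumptions for
illustration, not from the thread; both commands need root):

```shell
# Option 1: take a read-only snapshot, then stream it to the
# cold-storage disk with send/receive. The target must itself be
# a mounted BTRFS filesystem.
btrfs subvolume snapshot -r /data /data/.cold-snap
btrfs send /data/.cold-snap | btrfs receive /mnt/coldstore

# Option 2: plain rsync. Works with any target filesystem;
# -aHAX preserves hard links, ACLs and extended attributes.
rsync -aHAX --delete /data/ /mnt/coldstore/data/
```

Either way the resulting disk is an independent filesystem, so plugging
it back in alongside the original later is harmless.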
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html