Sadly, I think I understand now.
So by adding the second drive, BTRFS saw it as an extension of the data
(a la JBOD?). Even though I thought I was only adding RAID1 for
metadata, I was also adding to the data storage.
I assume that even though chunk-recover reports healthy chunks, there's
little to no way to actually recover the data in them?
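(In case it helps anyone searching later, this is roughly what I've
been trying; /dev/sda and /tmp/restore-test are just placeholders for
my actual device and scratch directory:)

  # Scan the surviving device and attempt to rebuild the chunk tree
  # (prompts before writing anything)
  btrfs rescue chunk-recover -v /dev/sda

  # Dry run of offline file extraction, to see what restore thinks
  # it can still reach
  btrfs restore -D /dev/sda /tmp/restore-test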
On 10/31/2014 4:35 AM, Robert White wrote:
On 10/30/2014 06:30 AM, Zack Coffey wrote:
Rob, That second drive was immediately put to use elsewhere. I figured
that with only the metadata on that drive, it wouldn't matter. The data
stayed single and wasn't part of the second drive; only the metadata
was. I must not be capable of understanding why that wouldn't work.
I thought all I was doing was removing a duplication of metadata, and
the worst I would see is a message complaining about a missing drive. I
never thought the data, or access to it, could be compromised in what
seemed to be such a simple situation.
Anand, I get the same output with mount -o recovery,ro.
Your data is gone if your other drive is gone.
Single doesn't mean what you think it means. Single means "one single
copy of your data"; it has _nothing_ to do with "one single drive". If
it did, then after a "btrfs device add" the default would be to never,
ever, use the added drive.
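Roughly, the sequence that (I'm guessing) got you here, and what the
safe exit would have looked like; /dev/sdb and /mnt stand in for your
actual device and mount point:

  # Add the second drive: from here on it is part of the pool and
  # new data chunks may be allocated on it
  btrfs device add /dev/sdb /mnt

  # Convert only the metadata to raid1; data stays "single", but
  # "single" still spans BOTH devices
  btrfs balance start -mconvert=raid1 /mnt

  # The safe way back out: migrate the drive's chunks off, then
  # drop it from the pool
  btrfs device delete /dev/sdb /mnt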
RAID0 means "striped": the storage is divided into chunks. With N
drives (numbered 0 through N-1), chunk=0 is on drive=0 at offset zero,
chunk=1 is on drive=1 at offset zero, and so on up to chunk=N-1 on
drive=N-1. Chunk=N then wraps back to drive=0 at offset Chunk_Size,
chunk=N+1 to drive=1 at offset Chunk_Size, and so on.
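In shell-arithmetic terms the striping math is just modular division
(the numbers here are illustrative):

  N=2                          # number of drives
  CHUNK_SIZE=$((64 * 1024))    # chunk size in bytes (illustrative)
  i=5                          # logical chunk index
  drive=$(( i % N ))
  offset=$(( (i / N) * CHUNK_SIZE ))
  echo "chunk $i -> drive $drive, offset $offset"   # chunk 5 -> drive 1, offset 131072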
Concatenation means that drive=N begins where drive=N-1 ends, at
logical offset sum(sizeof(drive) for all drives before N). So byte=0 is
on drive=0 at offset 0, and byte=(sizeof drive0) is on drive=1 at
offset 0.
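Same idea in shell arithmetic, for two drives (sizes are illustrative):

  SIZE0=$((100 * 1024 * 1024 * 1024))   # size of drive 0 (100G)
  b=$SIZE0                              # first byte past the end of drive 0
  if [ "$b" -lt "$SIZE0" ]; then
      echo "byte $b -> drive 0, offset $b"
  else
      echo "byte $b -> drive 1, offset $(( b - SIZE0 ))"
  fi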
The RAID standard never addressed bulk concatenation, so there is no
"raid number" for simply placing one whole drive after another. BTRFS
calls it "single"; other systems use other words (e.g. "linear" or
"JBOD").
So if you had a 100G drive and added a second 100G drive, you'd have a
logical 200G volume, where the first 100G is on drive one and the
second 100G is on drive two.
You basically obliterated the second half of the filesystem's storage
when you physically removed the drive without semantically removing it
(btrfs device delete) first. You might as well have erased it with a
magnet, and all the data with it. Worse still, if you did any sort of
balance or defrag, you likely moved huge numbers of chunks holding "the
_single_ copy of your data" onto that other device.
So the layout (profile) option isn't about limiting which drives are
used; that wouldn't make sense, since that's what device add/delete is
for. It's about how the data is laid out across all the drives.
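To make the distinction concrete (again, /mnt and the devices are
placeholders): layout is changed with balance convert filters,
membership with device add/delete:

  # Change how data/metadata are laid out across the existing drives
  btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt

  # Change WHICH drives are in the filesystem
  btrfs device add /dev/sdc /mnt
  btrfs device delete /dev/sdb /mnt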
All those unreachable addresses are on that now-defunct drive. No
mount option will ever get you that data back.