On Jan 22, 2014, at 11:41 AM, G. Michael Carter <mi...@carterfamily.ca> wrote:

> How do I get around this.  The drive /dev/sdf has bad sectors.
> 
> Label: Store_01  uuid: ae612523-63cf-4860-a2cb-83a26d907e43
>    Total devices 5 FS bytes used 7.51TiB
>    devid    1 size 0.00 used 77.00GiB path /dev/sdf

size 0.00 but used 77.00GiB? Does that even make sense?

>    devid    3 size 1.82TiB used 1.41TiB path /dev/sdd
>    devid    4 size 2.73TiB used 2.32TiB path /dev/sda
>    devid    5 size 2.73TiB used 2.32TiB path /dev/sdb
>    devid    6 size 1.82TiB used 1.41TiB path /dev/sdc
> 
> Btrfs v3.12
> Data, RAID0: total=1.86TiB, used=1.85TiB
> Data, single: total=5.65TiB, used=5.64TiB
> System, RAID1: total=32.00MiB, used=732.00KiB
> Metadata, RAID1: total=10.00GiB, used=8.69GiB
> 
> btrfs device delete /dev/sdf /mnt/Store
> ERROR: error removing the device '/dev/sdf' - Input/output error

It seems a device can only be removed once all of the data on it has been 
successfully migrated to the other devices.
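One thing worth trying before the delete is draining the chunks off the failing devid with a filtered balance. This is only a sketch of the general sequence, not something from the thread; it assumes btrfs-progs with the balance `devid` filter, and `DRYRUN=echo` just prints the commands instead of running them:

```shell
# Hypothetical drain-then-delete sequence (assumes the btrfs balance
# 'devid' filter; devid 1 is the failing /dev/sdf in the report above).
MNT=/mnt/Store
BAD_DEVID=1
DRYRUN=echo   # set DRYRUN= to actually run the commands

# Relocate data and metadata chunks off the failing devid onto the
# remaining devices (this will still hit the same I/O errors on
# unreadable sectors, which is exactly the limitation being discussed).
$DRYRUN btrfs balance start -ddevid=$BAD_DEVID -mdevid=$BAD_DEVID "$MNT"

# Only once the device holds no chunks can the delete succeed.
$DRYRUN btrfs device delete /dev/sdf "$MNT"
```

If the balance aborts with the same input/output error, you're back to the question below: there is currently no way to tell it to skip the unreadable extents.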

> 
> I've tried rebalancing as much of the data off the drive I can.  But
> there's still bits in that 77GB that's good data.
> 
> Is there a way of having btrfs skip around the input/output error and
> then force the drive to remove?

It's a valid question for both raid0 and single data profiles: will it one day 
be possible to tolerate read errors, migrate whatever can be migrated, and then 
permit removal of the (bad) device? A scrub can already identify the corrupted 
files; an additional feature would be an easy way to delete them.
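Today you can approximate that last part by hand: run a scrub and pull the affected paths out of the kernel log. A minimal sketch, assuming scrub warnings in the kernel's `(path: ...)` format (the sample lines below are invented for illustration, not taken from this system):

```shell
# Extract unique file paths from scrub checksum/read-error warnings.
# The log format is an assumption modeled on kernel scrub messages.
log='BTRFS warning (device sdf): checksum error at logical 5085241344 on dev /dev/sdf, root 5, inode 257, offset 0, length 4096, links 1 (path: videos/clip1.mkv)
BTRFS warning (device sdf): i/o error at logical 5085304832 on dev /dev/sdf, root 5, inode 258, offset 0, length 4096, links 1 (path: videos/clip2.mkv)'

# In practice you would feed this from dmesg instead of a sample string.
printf '%s\n' "$log" | sed -n 's/.*(path: \(.*\))$/\1/p' | sort -u
```

The resulting list could then be reviewed and deleted, after which a drain and device delete would in principle have nothing unreadable left to copy.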

In the case of a multiple-device raid0, where a regular balance is not a 
requirement, I could very easily start with two disks, add two more, and so on, 
and end up with a significant amount of completely valid data that survives a 
single-disk failure. Clearly the file system itself is OK thanks to metadata 
raid1.


Chris Murphy