> It has now been doing "btrfs device delete missing /mnt" for about 90 hours.
>
> These 90 hours seem like a rather long time, given that a rebalance/convert
> from 4-disk-raid5 to 4-disk-raid1 took about 20 hours months ago, and a scrub
> takes about 7 hours (4-disk-raid1).
>
> OTOH the filesystem will be rather full with only 3 of 4 disks available, so
> I do expect it to take somewhat "longer than usual".
>
> Would anyone venture a guess as to how long it might take?
It's done now, and took close to 99 hours to rebalance 8.1 TB of data from a 4x6TB raid1 (12 TB capacity) with 1 drive missing onto the remaining 3x6TB raid1 (9 TB capacity).

Now I made sure quotas were off, then started a screen session to fill the new 8 TB disk with zeros, detached it, and checked iotop to get a rough estimate of how long it will take (I'm aware it will become slower over time). After that I'll add this 8 TB disk to the btrfs raid1 (for yet another rebalance).

The next 3 disks will be replaced with "btrfs replace", so only one rebalance each is needed. I assume each "btrfs replace" would do a full rebalance, and thus assign chunks according to the normal strategy of choosing the two drives with the most free space: in this case a chunk to the new drive, and a mirrored chunk to whichever of the 3 existing drives has the most free space.

What I'm wondering is this: if the goal is to replace 4x 6 TB drives (raid1) with 4x 8 TB drives (still raid1), is there a way to remove one 6 TB drive at a time and recreate its exact contents from the other 3 drives onto a new 8 TB drive, without doing a full rebalance? That is: without writing any substantial amount of data onto the remaining 3 drives.

It seems to me that would be a lot more efficient, but it would go against the normal chunk assignment strategy.

Cheers,
boli

--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs"
in the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
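For what it's worth, the drive-at-a-time swap described above can be sketched with "btrfs replace" roughly as follows. Device names here are hypothetical placeholders (/dev/sdd for the outgoing 6 TB drive, /dev/sde for the new 8 TB one); as I understand it, replace copies the departing drive's chunks directly onto the target rather than rebalancing, so the other drives see little write traffic. These commands are destructive and need root; treat this as a sketch, not a tested recipe.

```shell
# Start the replacement: copy chunks from the old device (or reconstruct
# them from the remaining mirrors if it is missing) onto the new one.
btrfs replace start /dev/sdd /dev/sde /mnt

# Check progress; replace runs in the background by default.
btrfs replace status /mnt

# The new drive is larger, so grow that device to its full size afterwards
# (replace the devid "1" with the actual devid shown by
# "btrfs filesystem show /mnt").
btrfs filesystem resize 1:max /mnt
```

If the old drive has already failed or been removed, the source can be given as its devid instead of a device path (e.g. "btrfs replace start 2 /dev/sde /mnt").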