On 2018/03/22 01:13, Liu Bo wrote:
> On Tue, Mar 20, 2018 at 7:01 PM, Qu Wenruo <quwenruo.bt...@gmx.com> wrote:
>>
>>
>> On 2018/03/21 01:44, Mike Stevens wrote:
>>>
>>>>> 30 devices is really not that much; heck, you get 90-disk top-load JBOD
>>>>> storage chassis these days, and BTRFS does sound like an attractive
>>>>> choice for things like that.
>>>
>>>> So Mike's case is that both metadata and data are configured as
>>>> raid6, and the operations he tried, balance and scrub, need to set
>>>> the existing block groups read-only (in order to avoid any further
>>>> changes being applied while the operations are running), and that is
>>>> where another system chunk is needed.
>>>
>>>> However, I think it'd be better to have some warnings about this when
>>>> doing a) mkfs.btrfs -mraid6, b) btrfs device add.
>>>
>>>> David, any idea?
>>>
>>> I'll certainly vote for a warning; I would have set this up differently
>>> had I been aware.
>>>
>>> My filesystem check seems to have returned successfully:
>>>
>>> [root@auswscs9903] ~ # btrfs check --readonly /dev/sdb
>>> Checking filesystem on /dev/sdb
>>> UUID: 77afc2bb-f7a8-4ce9-9047-c031f7571150
>>> checking extents
>>> checking free space cache
>>> checking fs roots
>>> checking csums
>>> checking root refs
>>> found 97926270238720 bytes used err is 0
>>> total csum bytes: 95395030288
>>> total tree bytes: 201223503872
>>> total fs tree bytes: 84484636672
>>> total extent tree bytes: 7195869184
>>> btree space waste bytes: 29627784154
>>> file data blocks allocated: 97756261568512
>>>
>>> I've remounted the filesystem and I can at least touch a file. I'm
>>> restarting the rsync that was running when it originally went read-only.
>>> What is the next step if it drops back to r/o?
>>
>> As already mentioned, if you're using tons of disks with RAID0/10/5/6 as
>> the metadata profile, you can just convert your metadata (or just the
>> system chunks) to RAID1/DUP.
>>
>> Then there will be more than enough space for the system chunk array.
>>
>
> It's a chicken-and-egg problem: balance seems to be the only way to
> switch RAID profiles, yet users are stuck here because balance aborts
> after failing to allocate an extra system chunk.
Use the skip_balance mount option to abort the current balance, then start
the new convert. Since convert allocates new chunks in the new profile, the
raid1 system chunk array should be able to fit into the superblock.

Thanks,
Qu

>
> thanks,
> liubo
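For reference, a minimal sketch of that sequence, assuming the filesystem is
mounted at /mnt (a placeholder path; /dev/sdb is one of the member devices,
as in the check output above):

  # umount /mnt
  # mount -o skip_balance /dev/sdb /mnt
  # btrfs balance cancel /mnt
  # btrfs balance start -mconvert=raid1 -sconvert=raid1 -f /mnt

skip_balance prevents the interrupted balance from resuming at mount time,
cancel then drops the paused balance, and -f (--force) is needed because
balance refuses to convert system chunks (and to reduce redundancy from
raid6 to raid1) without it.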