2017-09-04 21:32 GMT+03:00 Stefan Priebe - Profihost AG <s.pri...@profihost.ag>:
>> Maybe you can make your raid setup faster by:
>> 1. Use Single Profile
>
> I'm already using the raid0 profile - see below:

If I understand correctly, you have a very big data set with random RW
access, so:
I'm suggesting the single profile because it keeps writes compact on one
device, which can make the WB cache more effective.
Because writes will not be spread across several devices, the chance
that a full stripe gets overwritten increases.
That will effectively behave like raid0 with a very big stripe size.
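For reference, converting the existing chunks could look roughly like
this (untested sketch; /mnt/data is a placeholder for your mount point,
and a full balance of ~21TiB of data will take a long time):

    # convert data chunks from raid0 to the single profile
    btrfs balance start -dconvert=single /mnt/data

    # optionally convert metadata as well
    btrfs balance start -mconvert=single /mnt/data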

> Data,RAID0: Size:22.57TiB, Used:21.08TiB
> Metadata,RAID0: Size:90.00GiB, Used:82.28GiB
> System,RAID0: Size:64.00MiB, Used:1.53MiB
>
>> 2. Use a different stripe size for the HW RAID5:
>>     I think 16kb will be optimal with 5 devices per raid group.
>>     That will give you a 64kb data stripe and 16kb parity.
>>     Btrfs raid0 uses a 64kb stripe, so that can make data access
>> unaligned (or use the single profile for btrfs)
>
> That sounds like an interesting idea except for the unaligned writes.
> Will need to test this.

AFAIK btrfs also uses 64kb for metadata:
https://github.com/torvalds/linux/blob/e26f1bea3b833fb2c16fb5f0a949da1efa219de3/fs/btrfs/extent-tree.c#L6678
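
To spell out the arithmetic behind the 16kb suggestion (assuming one
5-disk RAID5 group, i.e. 4 data disks + 1 parity disk):

    data disks       = 5 - 1     = 4
    full data stripe = 4 * 16kb  = 64kb   (matches the 64kb btrfs stripe)
    parity per row   = 1 * 16kb  = 16kb

So a 64kb aligned write can, in theory, become a full-stripe write with
no read-modify-write on the controller.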

>> 3. Use btrfs ssd_spread to decrease RMW cycles.
> Can you explain this?

Long description:
https://www.spinics.net/lists/linux-btrfs/msg67515.html

Short:
that option changes the allocator logic.
The allocator will spread writes more aggressively and always tries to
write to a new/empty area.
So in theory new data is written to new, empty chunks; if you have
plenty of free space, that gives some guarantee that old data is not
touched, so no RMW and, in theory, always full stripe writes.
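
If you want to try it, something like this should work (sketch;
/mnt/data is a placeholder for your mount point, and AFAIK ssd_spread
also implies the ssd option):

    mount -o remount,ssd_spread /mnt/data

    # or persistently in /etc/fstab (placeholder device):
    # /dev/sdX  /mnt/data  btrfs  defaults,ssd_spread  0 0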

But if you expect your array to be nearly full and you don't want to
defragment it, this can easily get you an ENOSPC error.
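
So keep an eye on unallocated space, e.g. (placeholder mount point):

    btrfs filesystem usage /mnt/data
    # watch the "Device unallocated" line; once it runs out, the
    # allocator has no new chunks left to spread writes into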

> Stefan

That's just my IMHO,
Thanks.
-- 
Have a nice day,
Timofey.