On 10/02/2021 05:12, Michal Rostecki wrote:
On Thu, Feb 04, 2021 at 08:30:01PM +0800, Anand Jain wrote:
Hi Michal,
Did you get a chance to run the evaluation with this patchset?
Thanks, Anand
Hi Anand,
Yes, I have tested your policies now. Sorry for the late response.
For the singlethreaded test:
[global]
name=btrfs-raid1-seqread
filename=btrfs-raid1-seqread
rw=read
bs=64k
direct=0
numjobs=1
time_based=0
[file1]
size=10G
ioengine=libaio
results are:
- raid1c3 with 3 HDDs:
3 x Seagate Barracuda ST2000DM008 (2TB)
* pid policy
READ: bw=215MiB/s (226MB/s), 215MiB/s-215MiB/s (226MB/s-226MB/s),
io=10.0GiB (10.7GB), run=47537-47537msec
* latency policy
READ: bw=219MiB/s (229MB/s), 219MiB/s-219MiB/s (229MB/s-229MB/s),
io=10.0GiB (10.7GB), run=46852-46852msec
* device policy - I didn't test it here; I guess it doesn't make sense
to check it on non-mixed arrays ;)
Hmm, the device policy actually provided the best performance on
non-mixed arrays with a fio sequential workload (a rough sketch of the
policy follows the table below):
raid1c3 read 500m (time = 60sec)
------------+-------------------------------------------
            |  nvme+ssd   nvme+ssd   all-nvme   all-nvme
            |  random     seq        random     seq
------------+-------------------------------------------
pid         |   973MiB/s   955MiB/s  2144MiB/s  1962MiB/s
latency     |  2005MiB/s  1924MiB/s  2083MiB/s  1980MiB/s
device(nvme)|  2021MiB/s  2034MiB/s  1920MiB/s  2132MiB/s
roundrobin  |   707MiB/s   701MiB/s  1760MiB/s  1990MiB/s
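To make the comparison concrete, here is a minimal userspace sketch of
what a "device" read policy boils down to: pin reads to a user-chosen
device whenever it holds a copy, and fall back to the pid rotor
otherwise. All names here are illustrative, not the patchset's actual
identifiers.

    #include <unistd.h>

    /* Hypothetical mirror descriptor; the field name is illustrative. */
    struct mirror {
        int devid;    /* device id holding this copy */
    };

    static int pick_mirror_device(const struct mirror *m, int num,
                                  int preferred_devid)
    {
        for (int i = 0; i < num; i++)
            if (m[i].devid == preferred_devid)
                return i;    /* preferred device has a copy */
        /* fall back to the classic pid rotor: pid modulo copy count */
        return (int)(getpid() % num);
    }

This also shows why the policy helps on mixed arrays: as long as the
preferred (fast) device holds a copy, every read lands on it.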
- raid1c3 with 2 HDDs and 1 SSD:
2 x Seagate Barracuda ST2000DM008 (2TB)
1 x Crucial CT256M550SSD1 (256GB)
* pid policy
READ: bw=219MiB/s (230MB/s), 219MiB/s-219MiB/s (230MB/s-230MB/s),
io=10.0GiB (10.7GB), run=46749-46749msec
* latency policy
READ: bw=517MiB/s (542MB/s), 517MiB/s-517MiB/s (542MB/s-542MB/s),
io=10.0GiB (10.7GB), run=19823-19823msec
* device policy
READ: bw=517MiB/s (542MB/s), 517MiB/s-517MiB/s (542MB/s-542MB/s),
io=10.0GiB (10.7GB), run=19810-19810msec
For the multithreaded test:
[global]
name=btrfs-raid1-seqread
filename=btrfs-raid1-seqread
rw=read
bs=64k
direct=0
numjobs=8 ; eight 10G jobs, matching the 80.0GiB aggregate in the results
time_based=0
[file1]
size=10G
ioengine=libaio
results are:
- raid1c3 with 3 HDDs:
3 x Seagate Barracuda ST2000DM008 (2TB)
* pid policy
READ: bw=1608MiB/s (1686MB/s), 201MiB/s-201MiB/s (211MB/s-211MB/s),
io=80.0GiB (85.9GB), run=50948-50949msec
* latency policy
READ: bw=1515MiB/s (1588MB/s), 189MiB/s-189MiB/s (199MB/s-199MB/s),
io=80.0GiB (85.9GB), run=54081-54084msec
- raid1c3 with 2 HDDs and 1 SSD:
2 x Seagate Barracuda ST2000DM008 (2TB)
1 x Crucial CT256M550SSD1 (256GB)
* pid policy
READ: bw=1843MiB/s (1932MB/s), 230MiB/s-230MiB/s (242MB/s-242MB/s),
io=80.0GiB (85.9GB), run=44449-44450msec
* latency policy
READ: bw=4213MiB/s (4417MB/s), 527MiB/s-527MiB/s (552MB/s-552MB/s),
io=80.0GiB (85.9GB), run=19444-19446msec
* device policy
READ: bw=4196MiB/s (4400MB/s), 525MiB/s-525MiB/s (550MB/s-550MB/s),
io=80.0GiB (85.9GB), run=19522-19522msec
To sum it up: I think that your policies are indeed a very good match
for mixed (non-rotational and rotational) arrays. On all-HDD arrays
they perform either slightly better or slightly worse than the pid
policy, depending on the test.
Theoretically, latency should perform better, as the measured latency
works as a feedback loop, dynamically adjusting the policy to the
delivered performance. But there is overhead in calculating the
latency; a sketch of the idea follows.
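A minimal userspace sketch of that feedback loop, assuming an integer
EWMA as the running average; the patchset's actual averaging may
differ, and all names here are illustrative:

    #include <stdint.h>

    struct mirror_lat {
        uint64_t avg_ns;    /* running average of read latency */
    };

    /* Fold one completed read into the average:
     * avg = 7/8 * avg + 1/8 * sample (cheap integer EWMA). */
    static void record_latency(struct mirror_lat *m, uint64_t sample_ns)
    {
        m->avg_ns = m->avg_ns - (m->avg_ns >> 3) + (sample_ns >> 3);
    }

    /* Route the next read to the mirror with the lowest average latency. */
    static int pick_mirror_latency(const struct mirror_lat *m, int num)
    {
        int best = 0;

        for (int i = 1; i < num; i++)
            if (m[i].avg_ns < m[best].avg_ns)
                best = i;
        return best;
    }

The per-completion bookkeeping in record_latency() is exactly the
calculation overhead mentioned above.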
Thanks, Anand
I've just sent out my proposal of a roundrobin policy, which seems to
give better performance on all-HDD arrays than your policies (and
better than the pid policy in all cases):
https://patchwork.kernel.org/project/linux-btrfs/patch/20210209203041.21493-7-mroste...@suse.de/
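For anyone skimming the thread, the basic shape of a round-robin
policy is just a rotating counter over the copies; the linked proposal
is more involved, so treat this as an illustrative sketch only:

    #include <stdatomic.h>

    static _Atomic unsigned long rr_counter;

    /* Each read takes the next copy in turn, spreading load evenly. */
    static int pick_mirror_roundrobin(int num_mirrors)
    {
        return (int)(atomic_fetch_add(&rr_counter, 1) % num_mirrors);
    }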
Cheers,
Michal