Hasan,

Historically, there have been several bugs related to write grant when 
max_dirty_mb is set to large values (depending on a few other details of system 
setup).

Write grant allows the client to write data into memory and write it out 
asynchronously.  When write grant is not available, the client is forced to do 
synchronous writes at small sizes.  The result looks exactly like this: write 
performance drops severely.
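
If you want to confirm that, something along these lines on the client should 
show the grant and dirty counters per OSC while the workload runs (exact 
parameter names can vary a little between releases):

  lctl get_param osc.*.cur_grant_bytes   # grant currently held per OSC
  lctl get_param osc.*.cur_dirty_bytes   # dirty data currently cached per OSC

If cur_grant_bytes collapses toward zero during the slow runs, that points at 
the grant problem described above.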

Depending on what version you're running, you may not have fixes for these 
bugs.  You could either try a newer Lustre version (you didn't mention what 
you're running) or just use a smaller value of max_dirty_mb.
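
To check what you are running and to try a smaller value, something like the 
following on the client should do it (note that set_param without -P does not 
persist across a remount):

  lctl get_param version        # or: lctl lustre_build_version
  lctl set_param osc.*.max_dirty_mb=1024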

I am surprised that you're still seeing a speedup from max_dirty_mb values over 
1 GiB.

Can you describe your system a bit more?  How many OSTs do you have, and how 
many stripes are you using?  max_dirty_mb is a per-OST value on the client, not 
a global one.
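
You can see the per-OSC values and the layout of your test file with something 
like this (the path below is just a placeholder):

  lctl get_param osc.*.max_dirty_mb      # one value per OST the client talks to
  lfs getstripe /path/to/your/test/file  # stripe count and layout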

-Patrick
________________________________
From: lustre-discuss <lustre-discuss-boun...@lists.lustre.org> on behalf of 
Hasan Rashid via lustre-discuss <lustre-discuss@lists.lustre.org>
Sent: Friday, March 25, 2022 11:45 AM
To: lustre-discuss@lists.lustre.org <lustre-discuss@lists.lustre.org>
Subject: [lustre-discuss] Write Performance is Abnormal for max_dirty_mb Value 
of 2047

Hi Everyone,

As the manual suggests, the valid range for max_dirty_mb is values larger than 
0 and smaller than the lesser of 2048 MiB or 1/4 of the client's RAM. In my 
system, the client has 196 GiB of RAM, so the maximum valid value for 
max_dirty_mb (mdm) is 2047 MiB.

However, when we set max_dirty_mb to 2047, we see very low write throughput 
across the multiple Filebench workloads we have tested so far. I am providing 
details for one of the tested workloads below.

Workload Detail: We are doing only random write operations of 1 MiB size, from 
one process with one thread, to a single large file of 5 GiB.

Observed Result: As you can see from the diagram below, as we increased the mdm 
value from 768 to 1792 in steps of 256, write throughput increased gradually. 
However, at an mdm value of 2047, throughput dropped very significantly. This 
observation holds for all the workloads we have tested so far.


[Figure: write throughput for mdm values from 768 to 2047]

I am unable to figure out why performance would drop so sharply at an mdm value 
of 2047. Please share any insights that would help me understand this 
behavior.

Best Wishes,
Md Hasanur Rashid
_______________________________________________
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
