Freddie,

Yes, correct - those writes are more than large enough. The default
cutover is 2 MiB for writes and 4 MiB for reads, so it should 'just
work'. The only version which is 'new enough' is 2.17, though. Prior
to that, in 2.16, you have to set the O_DIRECT flag yourself, but the
IO can be unaligned; in 2.15, you have to actually set up aligned
O_DIRECT. Gradual progress.
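To make that concrete, here is roughly what the 2.15 case asks of an
application - a sketch only, where the 4 KiB alignment is an assumption
(page alignment is the safe choice), and the function name and length
are illustrative:

    #define _GNU_SOURCE          /* for O_DIRECT on Linux */
    #include <fcntl.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    /* 2.15: buffer address, offset, and length must all be aligned.
     * 2.16: the same O_DIRECT open works, but buffer, offset, and
     *       length may be arbitrary - the client copes internally.
     * 2.17: no O_DIRECT needed at all; buffered IO above the cutover
     *       is switched to direct IO automatically. */
    int write_odirect(const char *path, size_t len)
    {
        /* len is assumed to be a multiple of 4096 here (2.15 rule) */
        int fd = open(path, O_WRONLY | O_CREAT | O_DIRECT, 0644);
        if (fd < 0)
            return -1;

        void *buf;
        if (posix_memalign(&buf, 4096, len)) {  /* aligned allocation */
            close(fd);
            return -1;
        }
        memset(buf, 0, len);

        ssize_t rc = pwrite(fd, buf, len, 0);   /* offset 0 is aligned */

        free(buf);
        close(fd);
        return rc == (ssize_t)len ? 0 : -1;
    }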
If you do use 2.17, you can do:

    lctl get_param llite.*.*hybrid*

and see some stats and other settings.

Peter,

Correct - 2.17 clients should be able to do hybrid IO with 2.15
servers (not earlier).

Regards,
Patrick

________________________________
From: Freddie Witherden <[email protected]>
Sent: Saturday, February 7, 2026 2:31 PM
To: Patrick Farrell <[email protected]>
Subject: Re: [lustre-discuss] Group Lock Semantics

Hi Patrick,

Thank you very much for this.

On 07/02/2026 12:11, Patrick Farrell wrote:
> Hybrid IO is a new feature in Lustre 2.17 which automatically switches
> to direct IO for IO above a certain configurable size - it relies on
> another new Lustre trick, which is the ability to do /unaligned/
> direct IO. In fact, unaligned direct IO support is in Lustre 2.16, so
> if you have that or newer (or EXA6 from DDN), you could skip the
> alignment work you're describing. (Much of the purpose of this work -
> unaligned direct IO and hybrid IO - is to make the benefits of direct
> IO easy to access.)
>
> Currently we only switch for IO above a certain size, but we intend to
> eventually switch to direct IO when lock contention is detected as
> well. We simply haven't found time to do the development there yet.

So if I am understanding correctly, with a new enough version of Lustre
(our users are all over the place, but should be able to get a recent
version on a local cluster) we can just have each rank do unaligned
buffered I/O without any explicit locking and have it 'just work' from
a performance perspective? Our writes are large (100s of MiB,
contiguous).

Regards, Freddie.
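For reference, the access pattern in question - each rank issuing a
large, contiguous buffered write to its own region of a shared file,
with no O_DIRECT and no explicit locking - amounts to nothing more than
the following sketch (the path, sizing, and helper name are
illustrative, not from the thread):

    #include <fcntl.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* Sketch: rank-strided writes to a shared file.  On a 2.17 client,
     * any write at or above the cutover (2 MiB by default) is converted
     * to direct IO automatically; on older clients this is plain
     * buffered IO. */
    int rank_write(const char *path, int rank, const void *buf, size_t len)
    {
        int fd = open(path, O_WRONLY | O_CREAT, 0644);  /* no O_DIRECT */
        if (fd < 0)
            return -1;

        /* Each rank owns a disjoint, contiguous region; neither the
         * offset nor the buffer needs to be page-aligned for hybrid IO
         * to kick in. */
        off_t off = (off_t)rank * (off_t)len;
        ssize_t rc = pwrite(fd, buf, len, off);

        close(fd);
        return rc == (ssize_t)len ? 0 : -1;
    }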
_______________________________________________
lustre-discuss mailing list
[email protected]
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
