Re: [lustre-discuss] how does lustre handle node failure

2023-07-21 Thread Shawn via lustre-discuss
Hi Laura, thanks for your reply. It seems the OSSs share disks carved from a shared SAN, so the OSS pairs can fail over in a pre-defined manner if one node goes down, coordinated by an HA manager. This can certainly work at a limited scale. I'm curious whether this static scheme can scale to a …
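A minimal sketch of the failover pair described above, assuming a two-node Pacemaker cluster (the node names oss01/oss02, device paths, and mount points are illustrative; the `ocf:lustre:Lustre` resource agent is the one shipped in the Lustre contrib tree, and availability may vary by Lustre version):

```shell
# Hypothetical OSS failover pair sharing SAN LUNs, managed by Pacemaker.
# Each OST is a cluster resource that may run on either node, one at a time.
pcs resource create ost0 ocf:lustre:Lustre \
    target=/dev/mapper/lustre-ost0 mountpoint=/mnt/lustre/ost0

# Prefer oss01 for ost0, but allow failover to oss02 if oss01 dies.
pcs constraint location ost0 prefers oss01=100
pcs constraint location ost0 prefers oss02=50
```

The HA manager (Pacemaker here) owns the mount: if oss01 is fenced, the OST is remounted on oss02, which is exactly the pre-defined pairing the question asks about scaling up.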

Re: [lustre-discuss] [EXTERNAL] [BULK] ldiskfs patch rejected Rocky 8.6

2023-07-21 Thread Vicker, Darby J. (JSC-EG111)[Jacobs Technology, Inc.] via lustre-discuss
We have been using ZFS on our LFS for about the last 7 years. Back then we were using ZFS 0.7 and Lustre 2.10, and there was a significant decrease in metadata performance compared to ldiskfs. Most of our workflows at the time didn't need a lot of metadata performance, so we were OK with th…
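For concreteness, the two backends being compared are selected at format time with `mkfs.lustre --backfstype`. A hedged sketch, with illustrative pool, device, and filesystem names:

```shell
# ZFS backend: mkfs.lustre creates the pool and dataset in one step,
# here a mirrored MDT (pool mdt0pool, dataset mdt0).
mkfs.lustre --mdt --backfstype=zfs --fsname=testfs --index=0 \
    --mgsnode=mgs@tcp mdt0pool/mdt0 mirror /dev/sda /dev/sdb

# ldiskfs backend on a raw block device, for comparison.
mkfs.lustre --mdt --backfstype=ldiskfs --fsname=testfs --index=0 \
    --mgsnode=mgs@tcp /dev/sdc
```

The metadata-performance trade-off discussed in this thread is entirely a property of that `--backfstype` choice; the rest of the Lustre configuration is the same either way.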

Re: [lustre-discuss] ldiskfs patch rejected Rocky 8.6

2023-07-21 Thread Peter Bortas via lustre-discuss
Hi Jon, We have been running ZFS MDSs for 10-ish years, and while I'm sure there is still some performance to be gained by using ldiskfs, due to, among other things, the missing support for ZFS ZIL, it's worth it to get rid of the absolute BS mess that ldiskfs is. The big performance hits wher…

[lustre-discuss] ldiskfs patch rejected Rocky 8.6

2023-07-21 Thread Jon Marshall via lustre-discuss
Hi, Me again! I now have a successful build of the Lustre server with ZFS support; thanks, all, for your help with this. I'm now stuck on getting ldiskfs support to work: specifically, it requires the kernel-debuginfo and debugsource packages, which on Rocky 8.6 don't exist for the kernel I'm buildi…
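Two hedged options for this situation (repo ids and package names are assumptions based on how EL8-family distributions lay out debuginfo repos; check /etc/yum.repos.d on the build host):

```shell
# Option 1: try pulling the matching debuginfo packages from the
# distribution's debuginfo repos, if they carry this kernel build.
dnf --enablerepo='*debuginfo*' install \
    "kernel-debuginfo-$(uname -r)" \
    "kernel-debuginfo-common-x86_64-$(uname -r)"

# Option 2: sidestep the patched-ldiskfs build entirely and build a
# ZFS-only server, since the ZFS build already succeeds.
./configure --disable-ldiskfs --with-zfs
make rpms
```

Option 2 matches the working configuration reported earlier in the thread; Option 1 only helps if a debuginfo package exists for the exact kernel release being built against.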