Hi,
We found a solution to the directory-listing hang-ups and the
unresponsive OSS: there was a metadata inconsistency on the MDT. Since
the online "lctl lfsck_start ..." did not help, we tried an offline
ext4-level fix: we unmounted the MDT and ran e2fsck.
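For reference, the offline repair was roughly the following sketch (device path and mount point are placeholders for our setup, and anyone attempting this should take an MDT backup first; note that Lustre ships its own patched e2fsprogs, which may be required for ldiskfs targets):

```shell
# Placeholder device and mount point -- substitute your own MDT target.
MDT_DEV=/dev/mapper/mdt0

umount /lustre/mdt0              # take the MDT offline first
e2fsck -f "$MDT_DEV"             # -f: force a full check even if marked clean
mount -t lustre "$MDT_DEV" /lustre/mdt0
```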
>>> On Mon, 22 May 2023 13:08:19 +0530, Nick dan via lustre-discuss
>>> said:
> Hi, I had one doubt. In Lustre, data is divided into stripes
> and stored across multiple OSTs, so each OST holds some part
> of the data. My question is: if one OST fails, will there be
> data loss?
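As background to the question above: striping is configured per file or per directory with `lfs setstripe`. A sketch, with placeholder mount point and paths:

```shell
# Stripe new files in this directory across 4 OSTs with a 1 MiB stripe size
# (mount point and directory names are placeholders):
lfs setstripe -c 4 -S 1M /mnt/lustre/scratch

# Show which OSTs actually hold a given file's stripes:
lfs getstripe /mnt/lustre/scratch/data.bin
```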
This is extensively
>>> On Thu, 27 Apr 2023 10:20:54 +0100, Peter Grandi
>>> said:
>> - When I started this system I tried to back up the MDT data
>> - without success.
> [...] The ZFS version of MDT can do filesystem-level snapshots
> when mounted as 'zfs' instead of 'lustre'.
Just to be sure, even if this was already
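Concretely, a filesystem-level MDT backup on a ZFS backend might look like the following sketch (pool and dataset names are placeholders; ZFS snapshots are atomic, so this can be done without taking the target down for long):

```shell
# Placeholder pool/dataset names -- adjust to your layout.
zfs snapshot mdtpool/mdt0@pre-upgrade

# Stream the snapshot off the server as a backup:
zfs send mdtpool/mdt0@pre-upgrade | gzip > /backup/mdt0-pre-upgrade.zfs.gz

# List existing snapshots to verify:
zfs list -t snapshot
```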
Hi
Thank you for your reply
> Yes, the OSTs must provide internal redundancy - RAID-6 typically
Can RAID-6 be replaced with mirroring or RAID-0?
Which type of RAID is recommended for MDT and OST?
Also, could you briefly explain how data is read and written in Lustre
when ZFS is used as the backend filesystem?
Yes, the OSTs must provide internal redundancy - RAID-6 typically.
There is File Level Redundancy (FLR = mirroring) possible in Lustre file
layouts, but it is "unmanaged", so users or other system-level tools are
required to resync FLR files if they are written after mirroring.
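For example, FLR mirrors can be created and resynced with `lfs` (the file path is a placeholder, and FLR requires Lustre 2.11 or later):

```shell
# Create a new file with two mirror copies:
lfs mirror create -N2 /mnt/lustre/important.dat
# (For an existing file, add a mirror instead: lfs mirror extend -N1 <file>)

# After the file is written again, replicas go stale; as noted above,
# users or system tools must resync them manually:
lfs mirror resync /mnt/lustre/important.dat

# Inspect the mirror/stale state in the file's layout:
lfs getstripe -v /mnt/lustre/important.dat
```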
Cheers,
Hi
I had one doubt.
In Lustre, data is divided into stripes and stored across multiple OSTs, so
each OST holds some part of the data.
My question is: if one OST fails, will there be data loss?
Please advise.
Thanks and regards
Nick