Re: [lustre-discuss] EL9

2023-05-03 Thread Peter Jones via lustre-discuss
Yes. We will officially support RHEL9.x servers in Lustre 2.16.

On 2023-05-03, 8:52 AM, "lustre-discuss on behalf of Lana Deere via 
lustre-discuss" mailto:lustre-discuss-boun...@lists.lustre.org> on behalf of 
lustre-discuss@lists.lustre.org > wrote:


Is there any update on the status of Lustre server support on EL9?


.. Lana (lana.de...@gmail.com)
___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


Re: [lustre-discuss] question mark when listing file after the upgrade

2023-05-03 Thread Andreas Dilger via lustre-discuss
This looks like https://jira.whamcloud.com/browse/LU-16655, which breaks the Object
Index (OI) files after an upgrade from 2.12.x to 2.15.[012].

A patch for this has already landed to b2_15 and will be included in 2.15.3. If
you've hit this issue, you need to back up/delete the OI files (off of Lustre) and
run OI Scrub to rebuild them.

I believe the OI Scrub/rebuild is described in the Lustre Manual.
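Very roughly, and only as a sketch (follow the manual rather than this outline; the
device, mount points, and target name below are taken from this thread and are
illustrative):

# with the MDT stopped as a Lustre target, mount it as plain ldiskfs
mkdir -p /mnt/mdt-ldiskfs /root/oi-backup
mount -t ldiskfs /dev/mapper/experimds01-experimds01 /mnt/mdt-ldiskfs
# save copies of the OI files, then remove them from the MDT
cp -a /mnt/mdt-ldiskfs/oi.16.* /root/oi-backup/
rm -f /mnt/mdt-ldiskfs/oi.16.*
umount /mnt/mdt-ldiskfs
# remount as Lustre and let OI Scrub rebuild the Object Index
mount -t lustre /dev/mapper/experimds01-experimds01 /mnt/mdt
lctl lfsck_start -M experi01-MDT0000 -t scrub
# watch progress
lctl get_param -n osd-ldiskfs.experi01-MDT0000.oi_scrub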

Cheers, Andreas

On May 3, 2023, at 09:30, Colin Faber via lustre-discuss wrote:


Hi,

What does your client log indicate? (dmesg / syslog)

On Wed, May 3, 2023, 7:32 AM Jane Liu via lustre-discuss
<lustre-discuss@lists.lustre.org> wrote:
Hello,

I'm writing to ask for your help with an issue we observed after a major
upgrade of a large Lustre system from RHEL7 + 2.12.9 to RHEL8 + 2.15.2.
Basically, we preserved the MDT disk (a vdisk on a VM) and all of the OST
disks (JBOD) from the RHEL7 system, reinstalled the OS as RHEL8, and then
attached the preserved disks to the new RHEL8 system. However, I hit an
issue after the OS upgrade and Lustre installation.

I believe the issue is related to metadata.

The old MDS was a virtual machine, and the MDT vdisk was preserved
during the upgrade. When a new VM was created with the same hostname and
IP, the preserved MDT vdisk was attached to it. Everything seemed fine
initially. However, after the client mount was completed, the file
listing displayed question marks, as shown below:

[root@experimds01 ~]# mount -t lustre 11.22.33.44@tcp:/experi01
/mntlustre/
[root@experimds01 ~]# cd /mntlustre/
[root@experimds01 mntlustre]# ls -l
ls: cannot access 'experipro': No such file or directory
ls: cannot access 'admin': No such file or directory
ls: cannot access 'test4': No such file or directory
ls: cannot access 'test3': No such file or directory
total 0
d? ? ? ? ?? admin
d? ? ? ? ?? experipro
-? ? ? ? ?? test3
-? ? ? ? ?? test4

I shut down the MDT and ran "e2fsck -p /dev/mapper/experimds01-experimds01".
It reported "primary superblock features different from backup, check forced."

[root@experimds01 ~]# e2fsck -p /dev/mapper/experimds01-experimds01
experi01-MDT primary superblock features different from backup,
check forced.
experi01-MDT: 9493348/429444224 files (0.5% non-contiguous),
109369520/268428864 blocks

Running e2fsck again showed that the filesystem was clean.
[root@experimds01 /]# e2fsck -p /dev/mapper/experimds01-experimds01
experi01-MDT: clean, 9493378/429444224 files, 109369610/268428864
blocks

However, the issue persisted. The file listing continued to display
question marks.

Do you have any idea what could be causing this problem and how to fix it?
By the way, I have an e2image backup of the MDT from the RHEL7 system, just
in case we need to fix it using the backup. Also, after the upgrade, the
command "lfs df" shows that all OSTs and the MDT are fine.

Thank you in advance for your assistance.

Best regards,
Jane
___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


Re: [lustre-discuss] question mark when listing file after the upgrade

2023-05-03 Thread Colin Faber via lustre-discuss
Hi,

What does your client log indicate? (dmesg / syslog)
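For example, something along these lines on the client (a generic sketch; the
filters and time window are just illustrative):

dmesg | grep -iE 'lustre|lnet'
journalctl -k --since "1 hour ago" | grep -i lustre
# dump the Lustre kernel debug buffer to a file for closer inspection
lctl debug_kernel /tmp/lustre-client-debug.log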

On Wed, May 3, 2023, 7:32 AM Jane Liu via lustre-discuss <
lustre-discuss@lists.lustre.org> wrote:

> Hello,
>
> I'm writing to ask for your help with an issue we observed after a major
> upgrade of a large Lustre system from RHEL7 + 2.12.9 to RHEL8 + 2.15.2.
> Basically, we preserved the MDT disk (a vdisk on a VM) and all of the OST
> disks (JBOD) from the RHEL7 system, reinstalled the OS as RHEL8, and then
> attached the preserved disks to the new RHEL8 system. However, I hit an
> issue after the OS upgrade and Lustre installation.
>
> I believe the issue is related to metadata.
>
> The old MDS was a virtual machine, and the MDT vdisk was preserved
> during the upgrade. When a new VM was created with the same hostname and
> IP, the preserved MDT vdisk was attached to it. Everything seemed fine
> initially. However, after the client mount was completed, the file
> listing displayed question marks, as shown below:
>
> [root@experimds01 ~]# mount -t lustre 11.22.33.44@tcp:/experi01
> /mntlustre/
> [root@experimds01 ~]# cd /mntlustre/
> [root@experimds01 mntlustre]# ls -l
> ls: cannot access 'experipro': No such file or directory
> ls: cannot access 'admin': No such file or directory
> ls: cannot access 'test4': No such file or directory
> ls: cannot access 'test3': No such file or directory
> total 0
> d? ? ? ? ?? admin
> d? ? ? ? ?? experipro
> -? ? ? ? ?? test3
> -? ? ? ? ?? test4
>
> I shut down the MDT and ran "e2fsck -p /dev/mapper/experimds01-experimds01".
> It reported "primary superblock features different from backup, check forced."
>
> [root@experimds01 ~]# e2fsck -p /dev/mapper/experimds01-experimds01
> experi01-MDT primary superblock features different from backup,
> check forced.
> experi01-MDT: 9493348/429444224 files (0.5% non-contiguous),
> 109369520/268428864 blocks
>
> Running e2fsck again showed that the filesystem was clean.
> [root@experimds01 /]# e2fsck -p /dev/mapper/experimds01-experimds01
> experi01-MDT: clean, 9493378/429444224 files, 109369610/268428864
> blocks
>
> However, the issue persisted. The file listing continued to display
> question marks.
>
> Do you have any idea what could be causing this problem and how to fix it?
> By the way, I have an e2image backup of the MDT from the RHEL7 system, just
> in case we need to fix it using the backup. Also, after the upgrade, the
> command "lfs df" shows that all OSTs and the MDT are fine.
>
> Thank you in advance for your assistance.
>
> Best regards,
> Jane
___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


Re: [lustre-discuss] EL9

2023-05-03 Thread Lana Deere via lustre-discuss
Is there any update on the status of Lustre server support on EL9?

.. Lana (lana.de...@gmail.com)
___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


[lustre-discuss] question mark when listing file after the upgrade

2023-05-03 Thread Jane Liu via lustre-discuss

Hello,

I'm writing to ask for your help with an issue we observed after a major
upgrade of a large Lustre system from RHEL7 + 2.12.9 to RHEL8 + 2.15.2.
Basically, we preserved the MDT disk (a vdisk on a VM) and all of the OST
disks (JBOD) from the RHEL7 system, reinstalled the OS as RHEL8, and then
attached the preserved disks to the new RHEL8 system. However, I hit an
issue after the OS upgrade and Lustre installation.


I believe the issue is related to metadata.

The old MDS was a virtual machine, and the MDT vdisk was preserved 
during the upgrade. When a new VM was created with the same hostname and 
IP, the preserved MDT vdisk was attached to it. Everything seemed fine 
initially. However, after the client mount was completed, the file 
listing displayed question marks, as shown below:


[root@experimds01 ~]# mount -t lustre 11.22.33.44@tcp:/experi01 
/mntlustre/

[root@experimds01 ~]# cd /mntlustre/
[root@experimds01 mntlustre]# ls -l
ls: cannot access 'experipro': No such file or directory
ls: cannot access 'admin': No such file or directory
ls: cannot access 'test4': No such file or directory
ls: cannot access 'test3': No such file or directory
total 0
d? ? ? ? ?? admin
d? ? ? ? ?? experipro
-? ? ? ? ?? test3
-? ? ? ? ?? test4

I shut down the MDT and ran "e2fsck -p /dev/mapper/experimds01-experimds01".
It reported "primary superblock features different from backup, check forced."

[root@experimds01 ~]# e2fsck -p /dev/mapper/experimds01-experimds01
experi01-MDT primary superblock features different from backup, 
check forced.
experi01-MDT: 9493348/429444224 files (0.5% non-contiguous), 
109369520/268428864 blocks


Running e2fsck again showed that the filesystem was clean.
[root@experimds01 /]# e2fsck -p /dev/mapper/experimds01-experimds01
experi01-MDT: clean, 9493378/429444224 files, 109369610/268428864 
blocks


However, the issue persisted. The file listing continued to display 
question marks.


Do you have any idea what could be causing this problem and how to fix it?
By the way, I have an e2image backup of the MDT from the RHEL7 system, just
in case we need to fix it using the backup. Also, after the upgrade, the
command "lfs df" shows that all OSTs and the MDT are fine.
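(For reference, an e2image metadata backup of that kind is typically created and
restored roughly as follows; the device path is the one above, while the image
file name is just illustrative:)

# create a metadata-only image of the unmounted MDT device
e2image /dev/mapper/experimds01-experimds01 /backup/experi01-mdt.e2i
# if it ever has to be restored, write the saved metadata back with -I
# (this overwrites metadata on the device -- last resort only)
e2image -I /dev/mapper/experimds01-experimds01 /backup/experi01-mdt.e2i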

Thank you in advance for your assistance.

Best regards,
Jane
___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


Re: [lustre-discuss] Lustre kernel space or user space

2023-05-03 Thread Tancheff, Shaun via lustre-discuss
A bit dated but should give a reasonable overview:
https://wiki.lustre.org/images/d/da/LUG08-Lustre-NFS.pdf

I would prefer NFSv4 over v3.
Expect NFS performance to be lower than that of a native Lustre client.
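
On the NFS client side, mounting such a re-export over NFSv4 would look roughly
like this (the gateway hostname and export path are made up for illustration):

# on a host without a Lustre client, mount the gateway's NFSv4 export
mount -t nfs -o vers=4.2 nfs-gateway:/lustre /mnt/lustre-nfs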

From: Nick dan <nickdan2...@gmail.com>
Date: Wednesday, May 3, 2023 at 5:14 PM
To: "Tancheff, Shaun" <shaun.tanch...@hpe.com>,
"lustre-discuss-requ...@lists.lustre.org" <lustre-discuss-requ...@lists.lustre.org>,
"lustre-discuss-ow...@lists.lustre.org" <lustre-discuss-ow...@lists.lustre.org>,
"lustre-discuss@lists.lustre.org" <lustre-discuss@lists.lustre.org>
Subject: Re: [lustre-discuss] Lustre kernel space or user space

Hi

Can you explain in detail how Lustre can be used with NFS? Do you mean that the
storage will be Lustre and the client will mount it over NFS? Will this affect
performance?

Thanks and regards
Nick

On Wed, 3 May 2023 at 14:30, Tancheff, Shaun <shaun.tanch...@hpe.com> wrote:
Lustre is an in-kernel file system.

I am not aware of a FUSE-mountable Lustre; it would very likely have an
unacceptable performance profile.

There is also a method to re-export Lustre as an NFS share so that NFS clients
can access data stored on Lustre; an example use case is when the client OS
does not support Lustre but does support NFS.

From: lustre-discuss <lustre-discuss-boun...@lists.lustre.org> on behalf of
Nick dan via lustre-discuss <lustre-discuss@lists.lustre.org>
Reply-To: Nick dan <nickdan2...@gmail.com>
Date: Wednesday, May 3, 2023 at 2:04 PM
To: "lustre-discuss-ow...@lists.lustre.org" <lustre-discuss-ow...@lists.lustre.org>,
"lustre-discuss-requ...@lists.lustre.org" <lustre-discuss-requ...@lists.lustre.org>,
"lustre-discuss@lists.lustre.org" <lustre-discuss@lists.lustre.org>
Subject: [lustre-discuss] Lustre kernel space or user space

Hi

I had a few questions.
1. Are Lustre storage and the client mounted in kernel space or user space?
2. Can Lustre be mounted with FUSE? What would be the use of mounting Lustre
with FUSE?

Thanks and regards,
Nick
___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


Re: [lustre-discuss] Lustre kernel space or user space

2023-05-03 Thread Nick dan via lustre-discuss
Hi

Can you explain in detail how Lustre can be used with NFS? Do you mean that
the storage will be Lustre and the client will mount it over NFS? Will this
affect performance?

Thanks and regards
Nick

On Wed, 3 May 2023 at 14:30, Tancheff, Shaun <shaun.tanch...@hpe.com> wrote:

> Lustre is an in-kernel file system.
>
>
>
> I am not aware of a FUSE-mountable Lustre; it would very likely have an
> unacceptable performance profile.
>
>
>
> There is also a method to re-export Lustre as an NFS share so that NFS
> clients can access data stored on Lustre; an example use case is when the
> client OS does not support Lustre but does support NFS.
>
>
>
> From: lustre-discuss <lustre-discuss-boun...@lists.lustre.org> on behalf of
> Nick dan via lustre-discuss <lustre-discuss@lists.lustre.org>
> Reply-To: Nick dan <nickdan2...@gmail.com>
> Date: Wednesday, May 3, 2023 at 2:04 PM
> To: "lustre-discuss-ow...@lists.lustre.org" <lustre-discuss-ow...@lists.lustre.org>,
> "lustre-discuss-requ...@lists.lustre.org" <lustre-discuss-requ...@lists.lustre.org>,
> "lustre-discuss@lists.lustre.org" <lustre-discuss@lists.lustre.org>
> Subject: [lustre-discuss] Lustre kernel space or user space
>
>
>
> Hi
>
>
>
> I had a few questions.
>
> 1. Are Lustre storage and the client mounted in kernel space or user space?
>
> 2. Can Lustre be mounted with FUSE? What would be the use of mounting Lustre
> with FUSE?
>
>
>
> Thanks and regards,
>
> Nick
>
___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


Re: [lustre-discuss] Lustre kernel space or user space

2023-05-03 Thread Tancheff, Shaun via lustre-discuss
Lustre is an in-kernel file system.

I am not aware of a FUSE-mountable Lustre; it would very likely have an
unacceptable performance profile.

There is also a method to re-export Lustre as an NFS share so that NFS clients
can access data stored on Lustre; an example use case is when the client OS
does not support Lustre but does support NFS.
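
A rough sketch of that kind of re-export, with hostnames, paths, and export
options chosen purely for illustration:

# on a gateway node that has a native Lustre client mount
mount -t lustre mgsnode@tcp:/lustrefs /lustre
# export the Lustre mount point over NFS
echo '/lustre *(rw,no_root_squash,fsid=1)' >> /etc/exports
exportfs -ra
systemctl start nfs-server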

From: lustre-discuss <lustre-discuss-boun...@lists.lustre.org> on behalf of
Nick dan via lustre-discuss <lustre-discuss@lists.lustre.org>
Reply-To: Nick dan <nickdan2...@gmail.com>
Date: Wednesday, May 3, 2023 at 2:04 PM
To: "lustre-discuss-ow...@lists.lustre.org" <lustre-discuss-ow...@lists.lustre.org>,
"lustre-discuss-requ...@lists.lustre.org" <lustre-discuss-requ...@lists.lustre.org>,
"lustre-discuss@lists.lustre.org" <lustre-discuss@lists.lustre.org>
Subject: [lustre-discuss] Lustre kernel space or user space

Hi

I had a few questions.
1. Are Lustre storage and the client mounted in kernel space or user space?
2. Can Lustre be mounted with FUSE? What would be the use of mounting Lustre
with FUSE?

Thanks and regards,
Nick
___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


[lustre-discuss] Lustre kernel space or user space

2023-05-03 Thread Nick dan via lustre-discuss
Hi

I had a few questions.
1. Are Lustre storage and the client mounted in kernel space or user space?
2. Can Lustre be mounted with FUSE? What would be the use of mounting Lustre
with FUSE?

Thanks and regards,
Nick
___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org