Re: [lustre-discuss] Lustre and server upgrade

2021-11-19 Thread Andreas Dilger via lustre-discuss
Dean, it should be emphasized that "llmount.sh" and "llmountcleanup.sh" are for quickly formatting and mounting *TEST* filesystems. They only create a few small (400MB) loopback files in /tmp and format them as OSTs and MDTs. This should *NOT* be used on a production system, or you will be

Re: [lustre-discuss] Lustre and server upgrade

2021-11-19 Thread Colin Faber via lustre-discuss
Hi Dean, Glad to hear you were able to clean up; it sounds like you've also been successful in your VM trial. I would suggest at this point that you take a close look at your installation and verify that all of the needed packages are installed correctly. The fact that it's complaining about
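A minimal sketch of the package check Colin suggests, assuming an RPM-based distro as used elsewhere in this thread; it degrades gracefully where rpm is absent:

```shell
#!/bin/sh
# List installed Lustre packages to verify the installation is complete.
# RPM-based system assumed (as in the thread); adjust for your distro.
if command -v rpm >/dev/null 2>&1; then
    rpm -qa | grep -i lustre || echo "no lustre packages found"
else
    echo "rpm not available on this system"
fi
```

On a correctly installed server you would expect to see the kernel, lustre, and (if installed) lustre-tests packages in the output.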

Re: [lustre-discuss] Lustre and server upgrade

2021-11-19 Thread STEPHENS, DEAN - US via lustre-discuss
I also figured out how to clean up after the llmount.sh script is run. There is an llmountcleanup.sh that will do that. Dean From: STEPHENS, DEAN - US Sent: Friday, November 19, 2021 1:08 PM To: Colin Faber Cc: lustre-discuss@lists.lustre.org Subject: RE: [lustre-discuss] Lustre and server
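The llmount.sh / llmountcleanup.sh pairing discussed above can be sketched as below. The install path /usr/lib64/lustre/tests is an assumption (the scripts ship with the lustre-tests RPM; check where your package actually puts them), and per Andreas's warning this is for test systems only:

```shell
#!/bin/sh
# Lifecycle of the loopback *TEST* filesystem -- never run on production.
# LUSTRE_TESTS path is an assumed default; override to match your install.
LUSTRE_TESTS=${LUSTRE_TESTS:-/usr/lib64/lustre/tests}

if [ -x "$LUSTRE_TESTS/llmount.sh" ]; then
    sh "$LUSTRE_TESTS/llmount.sh"        # format and mount loopback MDT/OSTs
    # ... exercise the test filesystem ...
    sh "$LUSTRE_TESTS/llmountcleanup.sh" # unmount and tear everything down
else
    echo "lustre-tests not installed; skipping"
fi
```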

Re: [lustre-discuss] Lustre and server upgrade

2021-11-19 Thread STEPHENS, DEAN - US via lustre-discuss
One more thing that I have noticed using the llmount.sh script: the directories that were created by the script under /mnt have 000 set for the permissions. The ones that I have configured under /mnt/lustre are set to 750 permissions. Is this something that needs to be fixed? I have these server
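A minimal sketch of inspecting and changing directory modes as described above; whether the 000 permissions actually need fixing is the question Dean poses, so this only shows the mechanics. The mktemp directory is a stand-in for the real mount points:

```shell
#!/bin/sh
# Check and adjust mount-point permissions; temp dir stands in for /mnt/*.
d=$(mktemp -d)
chmod 000 "$d"            # mimic the 000 mode seen on the llmount.sh dirs
stat -c '%a' "$d"         # prints: 0
chmod 750 "$d"            # match the 750 used on the configured mounts
stat -c '%a' "$d"         # prints: 750
rmdir "$d"
```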

Re: [lustre-discuss] Lustre and server upgrade

2021-11-19 Thread STEPHENS, DEAN - US via lustre-discuss
The other question that I have is how to clean up after llmount.sh has been run? If I do a df on the server I see that mds1, osd1, and ost2 are still mounted under /mnt. Do I need to manually umount them since llmount.sh completed successfully? Also I have not done anything to my MDS node
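If the cleanup script is not used, a manual unmount would look like the sketch below. The mount-point names follow the email above ("osd1" there is likely a typo for ost1); adjust the list to what df actually shows. DRY_RUN=1 only prints the commands:

```shell
#!/bin/sh
# Dry-run unmount of the test targets llmount.sh left mounted under /mnt.
# Names are taken from the email above and may differ on your system.
DRY_RUN=${DRY_RUN:-1}
for m in /mnt/mds1 /mnt/ost1 /mnt/ost2; do
    if [ "$DRY_RUN" = 1 ]; then
        echo "would run: umount $m"
    elif mountpoint -q "$m"; then
        umount "$m"
    fi
done
```

Note that llmountcleanup.sh, mentioned in a later message in this thread, does this teardown (and detaches the loopback files) for you.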

Re: [lustre-discuss] Lustre and server upgrade

2021-11-19 Thread STEPHENS, DEAN - US via lustre-discuss
Thanks for the help yesterday; I was able to install the Lustre kernel and software on a VM, including the test RPM. This is what I did, following these directions: Installed the Lustre kernel and

Re: [lustre-discuss] OSTs waiting for client on a pcs cluster

2021-11-19 Thread Meijering, Koos via lustre-discuss
One more addition: I also saw the following message on the OSS that had the OST before the failover: Nov 19 12:43:59 dh4-oss01 kernel: LustreError: 137-5: muse-OST0001_UUID: not available for connect from 172.23.53.214@o2ib4 (no target). If you are running an HA pair check that the target is mounted on
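A quick check suggested by that 137-5 error: confirm on each node of the HA pair whether the named target is actually mounted. This is a sketch; muse-OST0001 is the target from the log line above:

```shell
#!/bin/sh
# On each OSS of the HA pair, check whether the target the clients are
# asking for is mounted on this node. Target name from the log above.
TARGET=muse-OST0001
if mount -t lustre 2>/dev/null | grep -q "$TARGET"; then
    echo "$TARGET mounted on this node"
else
    echo "$TARGET not mounted here; check the partner node"
fi
```

If neither node has it mounted, the failover resource agent never completed the mount, which would explain the "no target" connect errors.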

Re: [lustre-discuss] OSTs waiting for client on a pcs cluster

2021-11-19 Thread Meijering, Koos via lustre-discuss
Hi Colin, I've attached 3 log files here: 1 from the metadata server and 2 from the object stores. Before these logs started the filesystem was working; then I requested the cluster to fail over muse-OST0001 from oss01 to oss02. On Thu, 18 Nov 2021 at 17:11, Colin Faber wrote: > Hi Koos, > > First thing --