Hi list,

Is it normal on a fresh install for CLVM to report that locking is
disabled while locking_type is set to 3 in lvm.conf?
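For reference, the relevant global stanza of my lvm.conf looks roughly like this (a sketch from memory rather than a verbatim copy of my file, and the fallback option may be commented out by default on your distribution):

```
# lvm.conf (excerpt) -- sketch of the global locking section
global {
    # 3 = clustered locking via clvmd (external/built-in cluster locking)
    locking_type = 3
    # when set, LVM tools fall back to local file-based locking
    # if the cluster locking daemon cannot be reached
    fallback_to_local_locking = 1
}
```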

[root@myhost ~]# clvmd -d
CLVMD[e2bd6170]: May 18 12:42:24 CLVMD started
CLVMD[e2bd6170]: May 18 12:42:24 Connected to CMAN
CLVMD[e2bd6170]: May 18 12:42:24 CMAN initialisation complete
CLVMD[e2bd6170]: May 18 12:42:25 DLM initialisation complete
CLVMD[e2bd6170]: May 18 12:42:25 Cluster ready, doing some more
initialisation
CLVMD[e2bd6170]: May 18 12:42:25 starting LVM thread
CLVMD[e2bd6170]: May 18 12:42:25 clvmd ready for work
CLVMD[e2bd6170]: May 18 12:42:25 Using timeout of 60 seconds
CLVMD[42aa8940]: May 18 12:42:25 LVM thread function started
File descriptor 5 (/dev/zero) leaked on lvm invocation. Parent PID 6240:
clvmd
  WARNING: Locking disabled. Be careful! This could corrupt your
metadata.
CLVMD[42aa8940]: May 18 12:42:25 LVM thread waiting for work
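Given the "connect() failed on local socket" errors further down, I suspect the clvmd local socket never appears. In case it helps, here is a tiny check I put together (a sketch; the default path /var/run/lvm/clvmd.sock is an assumption from my build and may differ on yours):

```shell
# Hypothetical helper: report whether the clvmd local socket exists.
# The default path is an assumption and may differ between builds.
check_clvmd_socket() {
    local sock="${1:-/var/run/lvm/clvmd.sock}"
    if [ -S "$sock" ]; then
        echo "clvmd socket present: $sock"
    else
        echo "clvmd socket missing: $sock"
    fi
}

# Check the default location (run as root while clvmd is up).
check_clvmd_socket
```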

I guess it's related to the following warning when trying to list the
VGs (while clvmd is up):

[root@myhost ~]# vgs
  connect() failed on local socket: No such file or directory
  WARNING: Falling back to local file-based locking.
  Volume Groups with the clustered attribute will be inaccessible.
  VG   #PV #LV #SN Attr   VSize  VFree
  vg00   1   7   0 wz--n- 24.28G 10.44G
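Since vgs falls back to file-based locking, I also wanted to double-check which locking_type actually wins in the file (commented-out lines were confusing me). A quick way to pull out the effective value, sketched here against a made-up sample config rather than my real one:

```shell
# Sketch: print the effective (uncommented) locking_type from an
# lvm.conf-style file. The sample config below is made up for illustration.
sample=$(mktemp)
cat > "$sample" <<'EOF'
global {
    # locking_type = 1
    locking_type = 3
}
EOF

# Match only lines where locking_type starts the setting (skips comments).
lt=$(awk '/^[[:space:]]*locking_type[[:space:]]*=/ { print $3 }' "$sample")
echo "locking_type = $lt"
rm -f "$sample"
```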

This prevents me from using a clustered VG (I can actually create the
clustered VG, but not an LV inside it).

[root@myhost ~]# vgcreate -c y vggfs01 /dev/sdb2
  connect() failed on local socket: No such file or directory
  WARNING: Falling back to local file-based locking.
  Volume Groups with the clustered attribute will be inaccessible.
  No physical volume label read from /dev/sdb2
  Physical volume "/dev/sdb2" successfully created
  Clustered volume group "vggfs01" successfully created
[root@myhost ~]# lvcreate -L 500M -n lvgfs01 vggfs01
  connect() failed on local socket: No such file or directory
  WARNING: Falling back to local file-based locking.
  Volume Groups with the clustered attribute will be inaccessible.
  Skipping clustered volume group vggfs01
[root@myhost ~]#

The end goal is to build GFS shared storage between 3 nodes.
The cman part seems to be OK on all three nodes:

[root@lhnq501l ~]# cman_tool services
type             level name       id       state
fence            0     default    00010001 none
[1 2 3]
dlm              1     rgmanager  00020003 none
[1 2 3]
dlm              1     clvmd      00010003 none
[1 2 3]

This is my first RHEL cluster, and I'm not sure where to investigate
next.
If anyone has seen this behaviour before, any comments are appreciated.

Thanks,

- Ben.


Confidentiality Warning: This message, including any attachment, is sent only 
for the use of the intended recipient; it is confidential and may constitute 
privileged information. If you are not the intended recipient, you are hereby 
notified that any printing, copying, distribution or other use of this message 
is strictly prohibited. If you have received this email in error, please notify 
the sender immediately by return email, and delete it. Thank you!

--
Linux-cluster mailing list
Linux-cluster@redhat.com
https://www.redhat.com/mailman/listinfo/linux-cluster
