Hi! I'd like to point out (to Lars) that I also see many syslog messages on SLES11 SP2 (+ updates) when using clvmd, just as Martin did: I created a physical volume in a 5-node cluster, and one host logged:

Aug 21 14:08:38 o1 clvmd[30725]: Got new connection on fd 5
Aug 21 14:08:38 o1 clvmd[30725]: Read on local socket 5, len = 31
Aug 21 14:08:38 o1 clvmd[30725]: check_all_clvmds_running
Aug 21 14:08:38 o1 clvmd[30725]: down_callback. node 537072812, state = 3
Aug 21 14:08:38 o1 clvmd[30725]: down_callback. node 587404460, state = 3
Aug 21 14:08:38 o1 clvmd[30725]: down_callback. node 520295596, state = 3
Aug 21 14:08:38 o1 clvmd[30725]: down_callback. node 553850028, state = 3
Aug 21 14:08:38 o1 clvmd[30725]: down_callback. node 570627244, state = 3
Aug 21 14:08:38 o1 clvmd[30725]: creating pipe, [12, 13]
Aug 21 14:08:38 o1 clvmd[30725]: Creating pre&post thread
Aug 21 14:08:38 o1 clvmd[30725]: Created pre&post thread, state = 0
Aug 21 14:08:38 o1 clvmd[30725]: in sub thread: client = 0x6bee00
Aug 21 14:08:38 o1 clvmd[30725]: Sub thread ready for work.
Aug 21 14:08:38 o1 clvmd[30725]: doing PRE command LOCK_VG 'P_#orphans' at 4 (client=0x6bee00)
Aug 21 14:08:38 o1 clvmd[30725]: lock_resource 'P_#orphans', flags=0, mode=4
Aug 21 14:08:38 o1 clvmd[30725]: lock_resource returning 0, lock_id=f70001
Aug 21 14:08:38 o1 clvmd[30725]: Writing status 0 down pipe 13
Aug 21 14:08:38 o1 clvmd[30725]: Waiting to do post command - state = 0
Aug 21 14:08:38 o1 clvmd[30725]: read on PIPE 12: 4 bytes: status: 0
Aug 21 14:08:38 o1 clvmd[30725]: background routine status was 0, sock_client=0x6bee00
Aug 21 14:08:38 o1 clvmd[30725]: distribute command: XID = 0
Aug 21 14:08:38 o1 clvmd[30725]: num_nodes = 5
Aug 21 14:08:38 o1 clvmd[30725]: add_to_lvmqueue: cmd=0x6bf4a0. client=0x6bee00, msg=0x6bef10, len=31, csid=(nil), xid=0
Aug 21 14:08:38 o1 clvmd[30725]: Sending message to all cluster nodes
Aug 21 14:08:38 o1 clvmd[30725]: process_work_item: local
Aug 21 14:08:38 o1 clvmd[30725]: process_local_command: LOCK_VG (0x33) msg=0x6bf240, msglen =31, client=0x6bee00
Aug 21 14:08:38 o1 clvmd[30725]: do_lock_vg: resource 'P_#orphans', cmd = 0x4 LCK_VG (WRITE|VG), flags = 0x4 ( DMEVENTD_MONITOR ), memlock = 0
Aug 21 14:08:38 o1 clvmd[30725]: Invalidating cached metadata for VG #orphans
Aug 21 14:08:38 o1 clvmd[30725]: Reply from node 1f0314ac: 0 bytes
Aug 21 14:08:38 o1 clvmd[30725]: Got 1 replies, expecting: 5
Aug 21 14:08:38 o1 clvmd[30725]: LVM thread waiting for work
Aug 21 14:08:38 o1 clvmd[30725]: 520295596 got message from nodeid 520295596 for 0. len 31
Aug 21 14:08:38 o1 clvmd[30725]: 520295596 got message from nodeid 537072812 for 520295596. len 18
Aug 21 14:08:38 o1 clvmd[30725]: Reply from node 200314ac: 0 bytes
Aug 21 14:08:38 o1 clvmd[30725]: Got 2 replies, expecting: 5
Aug 21 14:08:38 o1 clvmd[30725]: 520295596 got message from nodeid 553850028 for 520295596. len 18
Aug 21 14:08:38 o1 clvmd[30725]: Reply from node 210314ac: 0 bytes
Aug 21 14:08:38 o1 clvmd[30725]: Got 3 replies, expecting: 5
Aug 21 14:08:38 o1 clvmd[30725]: 520295596 got message from nodeid 570627244 for 520295596. len 18
Aug 21 14:08:38 o1 clvmd[30725]: Reply from node 220314ac: 0 bytes
Aug 21 14:08:38 o1 clvmd[30725]: Got 4 replies, expecting: 5
Aug 21 14:08:38 o1 clvmd[30725]: 520295596 got message from nodeid 587404460 for 520295596. len 18
Aug 21 14:08:38 o1 clvmd[30725]: Reply from node 230314ac: 0 bytes
Aug 21 14:08:38 o1 clvmd[30725]: Got 5 replies, expecting: 5
Aug 21 14:08:38 o1 clvmd[30725]: Got post command condition...
Aug 21 14:08:38 o1 clvmd[30725]: Waiting for next pre command
Aug 21 14:08:38 o1 clvmd[30725]: read on PIPE 12: 4 bytes: status: 0
Aug 21 14:08:38 o1 clvmd[30725]: background routine status was 0, sock_client=0x6bee00
Aug 21 14:08:38 o1 clvmd[30725]: Send local reply
Aug 21 14:08:38 o1 clvmd[30725]: Read on local socket 5, len = 31
Aug 21 14:08:38 o1 clvmd[30725]: check_all_clvmds_running
Aug 21 14:08:38 o1 clvmd[30725]: down_callback. node 537072812, state = 3
Aug 21 14:08:38 o1 clvmd[30725]: down_callback. node 587404460, state = 3
Aug 21 14:08:38 o1 clvmd[30725]: down_callback. node 520295596, state = 3
Aug 21 14:08:38 o1 clvmd[30725]: down_callback. node 553850028, state = 3
Aug 21 14:08:38 o1 clvmd[30725]: down_callback. node 570627244, state = 3
Aug 21 14:08:38 o1 clvmd[30725]: Got pre command condition...
Aug 21 14:08:38 o1 clvmd[30725]: doing PRE command LOCK_VG 'P_#orphans' at 6 (client=0x6bee00)
Aug 21 14:08:38 o1 clvmd[30725]: unlock_resource: P_#orphans lockid: f70001
Aug 21 14:08:38 o1 clvmd[30725]: Writing status 0 down pipe 13
Aug 21 14:08:38 o1 clvmd[30725]: Waiting to do post command - state = 0
Aug 21 14:08:38 o1 clvmd[30725]: read on PIPE 12: 4 bytes: status: 0
Aug 21 14:08:38 o1 clvmd[30725]: background routine status was 0, sock_client=0x6bee00
Aug 21 14:08:38 o1 clvmd[30725]: distribute command: XID = 1
Aug 21 14:08:38 o1 clvmd[30725]: num_nodes = 5
Aug 21 14:08:38 o1 clvmd[30725]: add_to_lvmqueue: cmd=0x6bf470. client=0x6bee00, msg=0x6bef10, len=31, csid=(nil), xid=1
Aug 21 14:08:38 o1 clvmd[30725]: Sending message to all cluster nodes
Aug 21 14:08:38 o1 clvmd[30725]: process_work_item: local
Aug 21 14:08:38 o1 clvmd[30725]: process_local_command: LOCK_VG (0x33) msg=0x6bf210, msglen =31, client=0x6bee00
Aug 21 14:08:38 o1 clvmd[30725]: do_lock_vg: resource 'P_#orphans', cmd = 0x6 LCK_VG (UNLOCK|VG), flags = 0x4 ( DMEVENTD_MONITOR ), memlock = 0
Aug 21 14:08:38 o1 clvmd[30725]: Invalidating cached metadata for VG #orphans
Aug 21 14:08:38 o1 clvmd[30725]: Reply from node 1f0314ac: 0 bytes
Aug 21 14:08:38 o1 clvmd[30725]: Got 1 replies, expecting: 5
Aug 21 14:08:38 o1 clvmd[30725]: LVM thread waiting for work
Aug 21 14:08:38 o1 clvmd[30725]: 520295596 got message from nodeid 520295596 for 0. len 31
Aug 21 14:08:38 o1 clvmd[30725]: 520295596 got message from nodeid 537072812 for 520295596. len 18
Aug 21 14:08:38 o1 clvmd[30725]: Reply from node 200314ac: 0 bytes
Aug 21 14:08:38 o1 clvmd[30725]: Got 2 replies, expecting: 5
Aug 21 14:08:38 o1 clvmd[30725]: 520295596 got message from nodeid 553850028 for 520295596. len 18
Aug 21 14:08:38 o1 clvmd[30725]: Reply from node 210314ac: 0 bytes
Aug 21 14:08:38 o1 clvmd[30725]: Got 3 replies, expecting: 5
Aug 21 14:08:38 o1 clvmd[30725]: 520295596 got message from nodeid 570627244 for 520295596. len 18
Aug 21 14:08:38 o1 clvmd[30725]: Reply from node 220314ac: 0 bytes
Aug 21 14:08:38 o1 clvmd[30725]: Got 4 replies, expecting: 5
Aug 21 14:08:38 o1 clvmd[30725]: 520295596 got message from nodeid 587404460 for 520295596. len 18
Aug 21 14:08:38 o1 clvmd[30725]: Reply from node 230314ac: 0 bytes
Aug 21 14:08:38 o1 clvmd[30725]: Got 5 replies, expecting: 5
Aug 21 14:08:38 o1 clvmd[30725]: Got post command condition...
Aug 21 14:08:38 o1 clvmd[30725]: Waiting for next pre command
Aug 21 14:08:38 o1 clvmd[30725]: read on PIPE 12: 4 bytes: status: 0
Aug 21 14:08:38 o1 clvmd[30725]: background routine status was 0, sock_client=0x6bee00
Aug 21 14:08:38 o1 clvmd[30725]: Send local reply
Aug 21 14:08:38 o1 clvmd[30725]: Read on local socket 5, len = 0
Aug 21 14:08:38 o1 clvmd[30725]: EOF on local socket: inprogress=0
Aug 21 14:08:38 o1 clvmd[30725]: Waiting for child thread
Aug 21 14:08:38 o1 clvmd[30725]: Got pre command condition...
Aug 21 14:08:38 o1 clvmd[30725]: Subthread finished
Aug 21 14:08:38 o1 clvmd[30725]: Joined child thread
Aug 21 14:08:38 o1 clvmd[30725]: ret == 0, errno = 0. removing client
Aug 21 14:08:38 o1 clvmd[30725]: add_to_lvmqueue: cmd=0x6bef10. client=0x6bee00, msg=(nil), len=0, csid=(nil), xid=1
Aug 21 14:08:38 o1 clvmd[30725]: process_work_item: free fd -1
Aug 21 14:08:38 o1 clvmd[30725]: LVM thread waiting for work
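As a stop-gap, I could of course filter these messages out of /var/log/messages on the syslog side rather than at the source. A minimal syslog-ng fragment for that might look like this (untested; I'm assuming the stock SLES source and destination names "src" and "messages" from /etc/syslog-ng/syslog-ng.conf, which may differ on your system):

filter f_not_clvmd { not program("clvmd"); };
log { source(src); filter(f_not_clvmd); destination(messages); };

But that hides the messages instead of making them useful, which is why I'd still prefer a concise log message from clvmd itself.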
Maybe I'm expecting too much, but isn't it possible to simply log "Telling other nodes that PV blabla is being created"?

lvm2-2.02.84-3.33.1
lvm2-clvm-2.02.84-3.25.30
cmirrord-2.02.84-0.7.37

Regards,
Ulrich
_______________________________________________
Linux-HA mailing list
Linux-HA@lists.linux-ha.org
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems