Hi.
Dec 29 13:47:16 s1 LVM(vg1)[1601]: WARNING: LVM Volume cluvg1 is not available (stopped)
Dec 29 13:47:16 s1 crmd[1515]: notice: process_lrm_event: Operation vg1_monitor_0: not running (node=s1, call=23, rc=7, cib-update=40, confirmed=true)
Dec 29 13:47:16 s1 crmd[1515]: notice:
By the way, just to note: under normal testing (manual failover, rebooting the active node) the cluster works fine. I only encounter this error if I power off / shut off the active node.
On Mon, Dec 29, 2014 at 4:05 PM, Marlon Guao marlon.g...@gmail.com wrote:
please use pastebin and show your whole logs
2014-12-29 9:06 GMT+01:00 Marlon Guao marlon.g...@gmail.com:
hi,
uploaded it here.
http://susepaste.org/45413433
thanks.
On Mon, Dec 29, 2014 at 5:09 PM, Marlon Guao marlon.g...@gmail.com wrote:
Ok, I attached the log file of one of the nodes.
On Mon, Dec 29, 2014 at 4:42 PM, emmanuel segura emi2f...@gmail.com wrote:
Sorry,
But your paste is empty.
2014-12-29 10:19 GMT+01:00 Marlon Guao marlon.g...@gmail.com:
ok, sorry for that.. please use this instead.
http://pastebin.centos.org/14771/
thanks.
On Mon, Dec 29, 2014 at 5:25 PM, emmanuel segura emi2f...@gmail.com wrote:
Hi,
You have a problem with the cluster:
stonithd: error: crm_abort: crm_glib_handler: Forked child 6186 to record non-fatal assert at logging.c:73
Try to post your cluster version (packages); maybe someone can tell you if this is a known bug or a new one.
2014-12-29 10:29 GMT+01:00 Marlon Guao
Dec 27 15:38:00 s1 cib[1514]: error: crm_xml_err: XML Error: Permission denied; I/O warning: failed to load external entity /var/lib/pacemaker/cib/cib.xml
Dec 27 15:38:00 s1 cib[1514]: error: write_cib_contents: Cannot link /var/lib/pacemaker/cib/cib.xml to
Hmm.. but as far as I can see, it looks like those messages can still be ignored. My original problem is that the LVM resource agent doesn't try to activate the VG on the passive node if the active node is powered off.
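For context, whether the agent activates the VG exclusively is driven by the resource's parameters. A minimal crm configuration sketch for the LVM resource (the primitive name and operation timeouts here are assumptions; only the VG name cluvg1 comes from the logs above):

```
primitive p_vg1 ocf:heartbeat:LVM \
    params volgrpname="cluvg1" exclusive="true" \
    op start timeout="30s" interval="0" \
    op stop timeout="30s" interval="0" \
    op monitor interval="10s" timeout="30s"
```

With exclusive="true" the agent only activates the VG on the node holding the resource, so failover depends on the cluster first declaring the dead node fenced.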
On Mon, Dec 29, 2014 at 5:33 PM, emmanuel segura emi2f...@gmail.com wrote:
Perhaps we need to focus on this message. As mentioned, the cluster is working fine under normal circumstances. My only concern is that the LVM resource agent doesn't try to re-activate the VG on the passive node when the active node goes down ungracefully (powered off). Hence, it could not mount
Hi,
Ah yeah.. I tried powering off the active node and ran pvscan on the passive one.. and yes, it didn't work: it doesn't return to the shell.
So, is the problem in DLM?
On Mon, Dec 29, 2014 at 5:51 PM, emmanuel segura emi2f...@gmail.com wrote:
Power off the active node and after one
DLM isn't the problem, but I think it's your fencing. When you powered off the active node, did the dead node remain in the unclean state? Can you show me your SBD timeouts? sbd -d /dev/path_of_your_device dump
Thanks
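As a rough sanity check on that dump, the usual rule of thumb is that msgwait should be at least twice the watchdog timeout, so a poisoned node has time to self-fence before survivors proceed. A small shell sketch (the sample timeout values below are assumptions, not the poster's real header):

```shell
# Parse a hypothetical sbd header dump and check the msgwait/watchdog rule.
dump='Timeout (watchdog) : 5
Timeout (msgwait)  : 10'

watchdog=$(printf '%s\n' "$dump" | awk '/watchdog/ {print $4}')
msgwait=$(printf '%s\n' "$dump" | awk '/msgwait/ {print $4}')

# Rule of thumb: msgwait >= 2 * watchdog.
if [ "$msgwait" -ge $((2 * watchdog)) ]; then
    echo "sbd timeouts look sane"
else
    echo "msgwait is too short relative to watchdog"
fi
```

If msgwait is too short relative to the watchdog, the survivors may consider fencing complete before the dead node is actually down, or conversely a long msgwait delays DLM/clvmd recovery.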
2014-12-29 11:02 GMT+01:00 Marlon Guao marlon.g...@gmail.com:
https://bugzilla.redhat.com/show_bug.cgi?id=1127289#c4
https://bugzilla.redhat.com/show_bug.cgi?id=1127289
2014-12-29 11:57 GMT+01:00 Marlon Guao marlon.g...@gmail.com:
here it is..
==Dumping header on disk /dev/mapper/sbd
Header version : 2.1
UUID :
Looks like it's similar to this as well:
http://comments.gmane.org/gmane.linux.highavailability.pacemaker/22398
But could it be that clvmd is not activating the VG on the passive node because it's waiting for quorum?
seeing this on the log as well.
Dec 29 21:18:09 s2 dlm_controld[1776]:
You have quorum-policy=ignore. In the thread you posted:
Nov 24 09:52:10 nebula3 dlm_controld[6263]: 566 datastores wait for fencing
Nov 24 09:52:10 nebula3 dlm_controld[6263]: 566 clvmd wait for fencing
Nov 24 09:55:10 nebula3 dlm_controld[6263]: 747 fence status 1084811078 receive -125 from
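That policy matters here because in a two-node cluster, losing one node always loses quorum, so the common setup is to let Pacemaker keep running without quorum while relying on STONITH to resolve split-brain. A sketch of the relevant crm properties (these are the typical two-node settings, not the poster's actual configuration):

```
property no-quorum-policy="ignore"
property stonith-enabled="true"
```

Note that even with no-quorum-policy=ignore, dlm_controld and clvmd still block all lock activity until fencing of the dead node is confirmed, which is exactly what the "wait for fencing" messages above indicate; a hung pvscan on the survivor is the expected symptom when fencing never completes.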
Hi,
Just want to ask regarding the LVM resource agent on Pacemaker/Corosync. I set up a 2-node cluster (openSUSE 13.2 -- my config below). The cluster works as expected, like doing a manual failover (via crm resource move) and automatic failover (by rebooting the active node, for instance). But, if
logs?
2014-12-29 6:54 GMT+01:00 Marlon Guao marlon.g...@gmail.com: