On 20/06/2013 12:23, Andrew Beekhof wrote:
On 20/06/2013, at 6:51 PM, Thibaut Pouzet thibaut.pou...@lyra-network.com
wrote:
On 19/06/2013 23:57, Andrew Beekhof wrote:
On 20/06/2013, at 1:57 AM, Thibaut Pouzet thibaut.pou...@lyra-network.com
wrote:
Hi,
I am trying to configure
hi,
it is only when I remove or add resources that corosync starts to eat up all the CPU.
drbd 8.4.1 (built from source)
corosync 1.4.1
pacemaker 1.1.8
crmsh 1.2.5 (this one from an extra repo, because crm is missing from pacemaker-cli?!
but that is not the reason for the trouble! I use pcs for everything except crm_mon)
pcs 0.9.26
when
19.06.2013, 10:19, Andrey Groshev gre...@yandex.ru:
I started experimenting.
Received the first incomprehensible situation:
There are three nodes. One exists only for quorum, i.e. without pacemaker
installed.
1. Run all the nodes - the cluster is running. All right.
2. Disconnect of
Maybe I asked this before, but I could not find the message and its answer.
When a resource becomes unmanaged and the problem has gone, I want the
resource to be managed by pacemaker again. What is to be done?
Situation: only one node left (the other is ill)
drbd could not get promoted
now it is
(in addition)
i tried
pcs resource start ms_drbd   # rc=0 (but AFAIK this only removes
is-managed=false if it exists; this meta-attribute does not exist here)
pcs resource manage ms_drbd  # ms_drbd does not exist
pcs resource manage drbd     # already managed
what kind of state is
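For what it's worth, one way to inspect and clear the `is-managed` meta-attribute directly is via `crm_resource`. This is a sketch, assuming the pacemaker 1.1.x tooling used in this thread; the resource names `ms_drbd`/`drbd` are taken from the poster's configuration and may need adjusting:

```shell
# Show whether an is-managed meta-attribute is set on the resource:
crm_resource --resource ms_drbd --get-parameter is-managed --meta

# Explicitly set it back to managed:
crm_resource --resource ms_drbd --set-parameter is-managed --meta \
    --parameter-value true

# Or delete the meta-attribute entirely so the cluster default applies:
crm_resource --resource ms_drbd --delete-parameter is-managed --meta

# A resource can also appear unmanaged because of cluster-wide
# maintenance-mode; check that too:
crm_attribute --name maintenance-mode --query
```

Note that on a master/slave set the attribute may live on the `ms_*` wrapper rather than the inner primitive, which would explain why `pcs resource manage drbd` reports "already managed".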
I found corosync still running after it stopped printing dots when I called
`service corosync stop`.
A second call succeeded, and all pacemaker services finished, too.
What's going on?!
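A defensive workaround, sketched here under the assumption of a sysvinit setup as on the poster's CentOS (paths and service names may differ on other distros), is to poll until the daemon has really exited rather than trusting the init script's return:

```shell
# "service corosync stop" returning does not always mean the daemon
# is gone; poll for up to ten seconds before giving up:
service corosync stop
for i in 1 2 3 4 5 6 7 8 9 10; do
    pidof corosync >/dev/null || break
    echo "corosync still running, waiting..."
    sleep 1
done

# If it survived the grace period, try the stop once more:
pidof corosync >/dev/null && service corosync stop
```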
2013/6/21 andreas graeper agrae...@googlemail.com
(in addition)
i tried
pcs resource start ms_drbd #
On 2013-06-21T10:56:29, andreas graeper agrae...@googlemail.com wrote:
hi,
it is only when I remove or add resources that corosync starts to eat up all the CPU.
drbd 8.4.1 (build from source)
corosync 1.4.1
yes, corosync 1.4.1 had one such error, I recall. If you're building
from source, why are you
On 2013-06-21T12:56:17, andreas graeper agrae...@googlemail.com wrote:
Maybe I asked this before, but I could not find the message and its answer.
When a resource becomes unmanaged and the problem has gone, I want the
resource to be managed by pacemaker again. What is to be done?
situation: only one node
hi,
Old version:
I have to maintain a CentOS 6.3 where, except for drbd (built from source),
only the standard repos are used.
For testing I installed the newest CentOS 6.4, but .. .
There is no chance to get rid of that CentOS 6.3, but for learning/testing,
what are the best distros? Not in general, but for use
I forgot to ask the more important question:
I used cleanup to wipe out the leftovers after I stopped and deleted a
resource.
When the (unmanaged) resource is somehow still there, then a cleanup would
let pacemaker try to manage that resource again and start all the others
depending on it (order /
hi,
The active node n1 is started and everything works fine, but after a reboot
of n2, drbd is not started by pacemaker. When I start drbd manually, crm_mon
shows it as slave (as if there were no problems).
Maybe someone experienced can have a look into the logs?
thanks in advance
andreas
log.xz
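Not from the original post, but a few generic commands that usually help narrow down why pacemaker did not start a resource (flags as in pacemaker 1.1.x; the log path is the common CentOS default and may differ on other setups):

```shell
# One-shot cluster status including inactive resources and fail counts:
crm_mon -1rf

# Where does the cluster think the resource is running?
crm_resource --resource drbd --locate

# Scan the cluster log for drbd-related errors and warnings:
grep -iE 'drbd|error|warn' /var/log/cluster/corosync.log | tail -n 50
```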
Thank you for replying, Vladislav!
I think the problem should be unrelated to iSCSI, you have correct setup
(of course I did not thoroughly look through all info, but idea is
perfectly correct).
Thank you for confirming.
Did you turn caching off for your VMs disks?
That's a point. Indeed
21.06.2013 17:23, Sven Arnold wrote:
Thank you for replying, Vladislav!
I think the problem should be unrelated to iSCSI, you have correct setup
(of course I did not thoroughly look through all info, but idea is
perfectly correct).
Thank you for confirming.
Did you turn caching off
Hi Andreas,
my two cents to your questions:
a) If you want to learn the most, take any distro and compile the components
from
source, then use them. = Most learned.
b) I don't know how others think about it, but I use a cluster to try to
increase uptime.
If I know that a distro's