Hello All,
I'm trying to configure a solution where one resource will be able to know
where the second one is located.
I have configured one simple resource:
primitive id=rs_ip_s2s class=ocf type=IPaddr provider=heartbeat
instance_attributes id=special-node1 score=5
rule
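For reference, a complete instance_attributes block with a rule, modeled on the node-specific parameter example in Pacemaker Explained, looks roughly like this; the expression, nvpair, and ip value below are placeholders rather than the poster's actual settings:
<primitive id="rs_ip_s2s" class="ocf" type="IPaddr" provider="heartbeat">
  <instance_attributes id="special-node1" score="5">
    <rule id="special-node1-rule" score="INFINITY">
      <!-- apply this attribute set only on node1 (node name is illustrative) -->
      <expression id="special-node1-expr" attribute="#uname" operation="eq" value="node1"/>
    </rule>
    <!-- placeholder value -->
    <nvpair id="special-node1-ip" name="ip" value="192.168.0.10"/>
  </instance_attributes>
</primitive>
The rule restricts the attribute set to one node, and the instance_attributes score decides which set takes precedence if more than one matches.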
Hi
I have a problem with the starting order on a node after manual migration.
My cluster consists of two nodes that provide a bunch of KVM virtualized
hosts.
When I start a node, DRBD, DLM (and the like) start first, then libvirt,
and finally the hosts provided by the VirtualDomain resources.
When I stop a node,
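A startup sequence like this is usually enforced with order (and matching colocation) constraints. A minimal sketch in crm shell syntax, with made-up resource names (ms_drbd, cl_libvirt, vm_guest1) rather than the poster's actual ones:
# DRBD must be promoted before libvirt starts, and libvirt before the guests
order o_drbd_before_libvirt inf: ms_drbd:promote cl_libvirt:start
order o_libvirt_before_guest inf: cl_libvirt vm_guest1
# keep the guest on the node where libvirt is running
colocation col_guest_with_libvirt inf: vm_guest1 cl_libvirt
Because order constraints are symmetrical by default, the same sequence is applied in reverse on shutdown, so the guests stop before libvirt and DRBD.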
On 06/15/2011 10:22 AM, Janec, Jozef wrote:
Hello All,
I'm trying to configure a solution where one resource will be able to know
where the second one is located.
I have configured one simple resource:
primitive id=rs_ip_s2s class=ocf type=IPaddr provider=heartbeat
instance_attributes
On Wed, 15 Jun 2011 10:26:36 +0200,
Pawel Warowny w...@master.pl wrote:
Hi
Sorry, I forgot to change the original node name in my example, so:
location vr_debian_squeeze_loc vr_debian_squeeze 100: lolek
in my previous post should be:
location vr_debian_squeeze_loc vr_debian_squeeze 100:
Hi,
this is my first post on this list. I hope I am posting my question to the
correct mailing list.
I have installed Pacemaker/Corosync on two Ubuntu Lucid servers, building
a two-node cluster. This cluster shall become a router for a datacenter.
I installed the distribution-provided packages. I guess
On Tuesday, June 14, 2011 07:17:41 Dejan Muhamedagic wrote:
Hi,
On Mon, Jun 13, 2011 at 03:30:03PM -0400, imnotpc wrote:
I've created a group containing the primary RA and MailTo as the second
resource. This works as expected and sends an e-mail when the primary
resource. This works as exected and sends an e-mail when the primary
resource stops or starts.
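For anyone setting up the same thing, a group of this shape can be written in crm shell syntax roughly as below; the primary resource type, the names, and the e-mail address are placeholders, not the actual configuration:
# placeholder primary resource; substitute the real RA
primitive rs_primary ocf:heartbeat:IPaddr2 \
    params ip=192.168.0.20 cidr_netmask=24 \
    op monitor interval=10s
# MailTo sends a notification whenever it is started or stopped
primitive rs_notify ocf:heartbeat:MailTo \
    params email=admin@example.com subject="Cluster notification"
# group members start in order and stop in reverse, so MailTo follows the primary
group grp_primary rs_primary rs_notify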
On 14-06-11 22:35, Florian Haas wrote:
On 06/14/2011 01:38 PM, Jelle de Jong wrote:
# disk errors during iscsi/drbd migration on kvm host system
http://paste.debian.net/119830/
You need to either use portblock (check the guide I mentioned in my 2/24
message), or move the IP address to the
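The portblock agent referred to here is ocf:heartbeat:portblock; a rough sketch of a blocking primitive, with a placeholder port and address (3260 is the usual iSCSI port, but the values are assumptions):
primitive rs_block_iscsi ocf:heartbeat:portblock \
    params protocol=tcp portno=3260 ip=192.168.0.30 action=block \
    op monitor interval=10s
A matching primitive with action=unblock is normally ordered around the service so that client traffic is held off until the target side is ready again.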
On 2011-06-15 15:08, Jelle de Jong wrote:
root@hennessy:~# dmesg
[56951.585704] device-mapper: snapshots: Snapshot is marked invalid.
[56951.590679] Buffer I/O error on device dm-24, logical block 0
..
[57077.664125] connection1:0: detected conn error (1020)
root@hennessy:~# lvscan
Hi Hideo-san,
On Wed, Jun 15, 2011 at 02:47:09PM +0900, renayama19661...@ybb.ne.jp wrote:
Hi all,
The crm command displays a message that does not match the actual operation.
When an operator executes the migrate command, crm should display unmigrate
in the message.
In
Hi,
On Wed, Jun 15, 2011 at 10:26:36AM +0200, Pawel Warowny wrote:
Hi
I have a problem with the starting order on a node after manual migration.
My cluster consists of two nodes that provide a bunch of KVM virtualized
hosts.
When I start a node, DRBD, DLM (and the like) start first, then libvirt
On Wednesday, June 15, 2011 12:18:52 Dejan Muhamedagic wrote:
Hi,
On Wed, Jun 15, 2011 at 06:52:21AM -0400, imnotpc wrote:
On Tuesday, June 14, 2011 07:17:41 Dejan Muhamedagic wrote:
Hi,
On Mon, Jun 13, 2011 at 03:30:03PM -0400, imnotpc wrote:
I've created a group containing
It may be a bit late given that you've just created your own script,
but you can grab the check-cluster (and maybe check-drbd) scripts from
the gno-cluster-tools package at ftp://ftp.gno.org/pub/tools/cluster-tools
If I have a cluster that is not otherwise monitored, I run those
every three hours
On Wednesday, June 15, 2011 15:40:31 Devin Reade wrote:
It may be a bit late given that you've just created your own script,
but you can grab the check-cluster (and maybe check-drbd) scripts from
the gno-cluster-tools package at
ftp://ftp.gno.org/pub/tools/cluster-tools
If I have a cluster
On Wed, Jun 15, 2011 at 12:24 PM, imnotpc imno...@rock3d.net wrote:
What I was thinking is that the DC is never fenced
Is this actually the case? It would sure explain the one gotcha I've
never been able to work around in a three node cluster with stonith/SBD. If
you unplug the network
Hello everybody,
I was doing some testing/experiments with stonith:meatware using the
following configuration: http://paste.debian.net/119991/
question 1: does somebody know if I should add a pingd location to the
meatware stonith (see configuration)
question 2: I had my resources running on
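The actual configuration sits behind the paste link, so the following is only a generic illustration of what question 1 asks about: a meatware stonith primitive plus a pingd-based location rule applied to it (the hostlist, names, and rule are assumptions):
primitive st_meatware stonith:meatware \
    params hostlist="node1 node2"
# a pingd-style location rule of the kind being asked about
location l_st_meatware_connected st_meatware \
    rule -inf: not_defined pingd or pingd lte 0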
Hi,
On Wed, Jun 15, 2011 at 10:29:15PM +0200, Jelle de Jong wrote:
Hello everybody,
I was doing some testing/experiments with stonith:meatware using the
following configuration: http://paste.debian.net/119991/
question 1: does somebody know if I should add a pingd location to the
On Wed, Jun 15, 2011 at 03:26:56PM -0500, mark - pacemaker list wrote:
On Wed, Jun 15, 2011 at 12:24 PM, imnotpc imno...@rock3d.net wrote:
What I was thinking is that the DC is never fenced
Is this actually the case?
In a way it is true. Only the DC can order fencing, and there is
always
On Wednesday, June 15, 2011 16:26:56 mark - pacemaker list wrote:
On Wed, Jun 15, 2011 at 12:24 PM, imnotpc imno...@rock3d.net wrote:
What I was thinking is that the DC is never fenced
Is this actually the case? It would sure explain the one gotcha I've
never been able to work around in a
On Wed, 15 Jun 2011 18:14:19 +0200,
Dejan Muhamedagic deja...@fastmail.fm wrote:
Hi
Thank you for reply.
That's really odd. Is it that at this time vr_debian_squeeze
runs on node1?
No, vr_debian_squeeze doesn't run on node1; it starts on node2 as the
first resource despite the order and
On Wednesday, June 15, 2011 17:14:49 Dejan Muhamedagic wrote:
Hi,
On Wed, Jun 15, 2011 at 10:29:15PM +0200, Jelle de Jong wrote:
Hello everybody,
I was doing some testing/experiments with stonith:meatware using the
following configuration: http://paste.debian.net/119991/
question
Hi Dejan,
Thank you for your reply.
Many thanks for the patch. But we need a common procedure to
fetch and remove options which are used in many commands from
the list of arguments. Options such as force and quiet.
Right now I'm quite busy elsewhere, so that may take time ...
All right.
I
Hi all,
We are using iDRAC6 IPMI as the stonith device on our 2-node cluster.
When one node's power cords are yanked out, both the main and the
backup, the secondary node does not take over as primary, meaning the fencing
operation didn't happen successfully from the secondary to the abruptly
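For reference, IPMI fencing for a DRAC-style BMC is typically set up along these lines in crm shell syntax; the addresses, credentials, and node names below are placeholders, and the location constraints keep each node from running its own fencing device:
primitive st_ipmi_node1 stonith:external/ipmi \
    params hostname=node1 ipaddr=10.0.0.11 userid=root passwd=secret interface=lanplus
primitive st_ipmi_node2 stonith:external/ipmi \
    params hostname=node2 ipaddr=10.0.0.12 userid=root passwd=secret interface=lanplus
location l_st_node1 st_ipmi_node1 -inf: node1
location l_st_node2 st_ipmi_node2 -inf: node2
Note that an on-board BMC such as an iDRAC shares the node's power feeds, so pulling both cords also takes the fence device down, which is a common reason a fencing operation cannot be confirmed in this scenario.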