Andrew,
Sorry, here it is:
[root@premium2 ~]# ls -la /var/lib/pacemaker/cib
total 204
drwxrwxr-x. 2 hacluster haclient 4096 Jul 21 22:04 .
drwxr-x---. 6 root root 4096 Jul 14 18:02 ..
-rw-------. 1 hacluster root 232 Jul 14 18:17 cib-0.raw
-rw-------. 1 hacluster root 32 Jul 14
On 16/09/13 16:53, Andreas Mock wrote:
> Hi all,
>
> I'm using (want to use) RHEL 6.4 fence_ipmilan for our IBM x3650 M4 (IMM).
> My problem is the following. In contrast to the documented behaviour,
> a 'chassis power off' or a 'chassis power reset' does a soft reset, as if
> you have pressed the on-off-button of the server
On 15/09/2013, at 3:14 AM, Stephen Marsh wrote:
> Hi all,
>
> I'm using Corosync 2.3.1 with Pacemaker 1.1.10 (final release) and DRBD 8.4.3.
>
> I've got a strange problem with the way Pacemaker handles DRBD resources that
> don't exist.
>
> I'm testing with this config:
>
> primitive vs1-1
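The truncated primitive above is where the config cuts off. For context, a DRBD resource under Pacemaker is usually a primitive plus a master/slave wrapper; a minimal sketch in crm shell syntax, where only the name vs1-1 comes from the post and the drbd_resource name and intervals are illustrative assumptions:

```
primitive vs1-1 ocf:linbit:drbd \
    params drbd_resource="vs1-1" \
    op monitor interval="29s" role="Master" \
    op monitor interval="31s" role="Slave"
ms ms-vs1-1 vs1-1 \
    meta master-max="1" master-node-max="1" \
         clone-max="2" clone-node-max="1" notify="true"
```

If the drbd_resource named in params has no matching definition under /etc/drbd.d/, the resource agent's probe and monitor calls fail, which may be related to the "resources that don't exist" behaviour described.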
On 16/09/2013, at 12:11 AM, ge...@riseup.net wrote:
> Hi all,
>
> I'm in the process of deploying a pacemaker cluster, running several xen
> vms, storage is done with drbd.
>
> Everything works like a charm, and now I just found the root cause (at
> least I believe) for the issue "device is sti
On 17/09/2013, at 3:42 AM, Саша Александров wrote:
> Hi, everyone!
>
> I have a pretty strange issue. When starting pacemaker, I get
>
> Sep 16 21:21:03 premium2 cib[27510]: notice: main: Using new config
> location: /var/lib/pacemaker/cib
> Sep 16 21:21:03 premium2 cib[27510]: error: crm_is_writable:
On 17/09/2013, at 4:01 AM, Errol Neal wrote:
> Hi. I'm trying to figure out EXACTLY what the process is for getting a
> cluster working on Ubuntu Raring.
> Most of the clusters I implemented required dlm-controld.pcmk, but I don't
> see this being shipped anymore in most distributions.
> I've tried to
Hi, everyone!
I have a pretty strange issue. When starting pacemaker, I get
Sep 16 21:21:03 premium2 cib[27510]: notice: main: Using new config
location: /var/lib/pacemaker/cib
Sep 16 21:21:03 premium2 cib[27510]: error: crm_is_writable:
/var/lib/pacemaker/cib must exist and be a directory
S
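Since crm_is_writable complains that /var/lib/pacemaker/cib must exist and be a directory, a minimal repair sketch follows, assuming the package-default hacluster:haclient ownership shown in the ls output above. It is demonstrated on a scratch path so it can be run unprivileged; on the node, substitute the real path and run the chown as root:

```shell
# Recreate the CIB directory with the layout Pacemaker expects.
# Demo uses a scratch path; on the node the path is /var/lib/pacemaker/cib.
CIB_DIR=/tmp/pacemaker-demo/cib
mkdir -p "$CIB_DIR"
chmod 750 "$CIB_DIR"   # owner rwx, group rx, matching the directory in the ls output
# On the real node, as root (assumption: package-default users):
#   chown -R hacluster:haclient /var/lib/pacemaker
ls -ld "$CIB_DIR"
```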
Hey Guys,
The OS I am running is CentOS 6.4 (64-bit) and I have disabled iptables and
SELinux.
My goal is to make Apache Tomcat highly available. As a first step, I thought
of testing with Apache.
My network setup is like this,
Node1 is connected to the switch.
Node2 is connected to the switch.
My cluster.conf file is a
My /etc/hosts is like this,
cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4
localhost4.localdomain4
::1 localhost localhost.localdomain localhost6
localhost6.localdomain6
10.30.2.98 test01.iopextech.com test01
10.30.2.99 test02.iopextech.com test02
and in my cluste
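The cluster.conf itself is cut off here. For a two-node CMAN setup like this one, a hedged sketch of its usual shape, using the hostnames from the /etc/hosts above; the cluster name, the two_node quorum settings, and the empty fencing section are illustrative assumptions only:

```xml
<?xml version="1.0"?>
<!-- Sketch of a two-node cluster.conf; hostnames taken from /etc/hosts above,
     everything else is illustrative. Fencing must still be configured. -->
<cluster config_version="1" name="testcluster">
  <cman two_node="1" expected_votes="1"/>
  <clusternodes>
    <clusternode name="test01.iopextech.com" nodeid="1"/>
    <clusternode name="test02.iopextech.com" nodeid="2"/>
  </clusternodes>
  <fencedevices/>
  <rm disabled="1"/>
</cluster>
```

Bump config_version on every change and keep the file identical on both nodes.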
On 16/09/2013 17:30, Gopalakrishnan N wrote:
Hey Guys,
The OS I am running is CentOS 6.4 (64-bit) and I have disabled iptables and
SELinux.
My goal is to make Apache Tomcat highly available. As a first step, I thought
of testing with Apache.
My network setup is like this,
Node1 is connected to switch
Node2
Hi all,
I'm using (want to use) RHEL 6.4 fence_ipmilan for our IBM x3650 M4 (IMM).
My problem is the following. In contrast to the documented behaviour,
a 'chassis power off' or a 'chassis power reset' does a soft reset, as if
you have pressed the on-off-button of the server. That means th
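When an IMM treats 'power off' as a soft ACPI shutdown, the usual knobs to try on fence_ipmilan are the lanplus interface, an onoff method instead of a power cycle, and a power_wait delay. A hedged sketch of such a stonith resource in crm shell syntax; the address and credentials are placeholders, and whether these parameters cure the soft-reset symptom depends on the IMM firmware:

```
primitive fence-x3650 stonith:fence_ipmilan \
    params ipaddr="10.0.0.10" login="USERID" passwd="PASSW0RD" \
           lanplus="1" method="onoff" power_wait="8"
```

Testing the same parameters by hand with ipmitool (-I lanplus) against the IMM first, outside of Pacemaker, narrows down whether the agent or the firmware is at fault.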
On 16/09/13 11:40, Errol Neal wrote:
> On Mon, 09/16/2013 02:13 PM, Digimer wrote:
>> On 16/09/13 11:01, Errol Neal wrote:
>>> Hi. I'm trying to figure out EXACTLY what the process is for getting a
>>> cluster working on Ubuntu Raring.
>>> Most of the clusters I implemented required dlm-controld
On 16/09/13 11:01, Errol Neal wrote:
> Hi. I'm trying to figure out EXACTLY what the process is for getting a
> cluster working on Ubuntu Raring.
> Most of the clusters I implemented required dlm-controld.pcmk, but I don't
> see this being shipped anymore in most distributions.
> I've tried to
Hi. I'm trying to figure out EXACTLY what the process is for getting a cluster
working on Ubuntu Raring.
Most of the clusters I implemented required dlm-controld.pcmk, but I don't see
this being shipped anymore in most distributions.
I've tried to document a how-to for CentOS on my blog at ha-
On 16/09/2013 14:18, Gopalakrishnan N wrote:
Do I need to have a crossover cable between each node? Is it mandatory?
No, it isn't.
In your case, I'd check the network architecture and/or firewalling
regarding multicast. You probably either have wrong iptables rules and/or a
switch dropping your multicast traffic
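For the iptables side, a hedged fragment for /etc/sysconfig/iptables on both nodes; it assumes the default corosync mcastport of 5405 (corosync also uses mcastport minus one, hence the range), so adjust it to whatever cluster.conf actually sets:

```
# Allow corosync/cman multicast traffic between the nodes
# (assumption: default mcastport 5405; corosync also uses port-1)
-A INPUT -p udp -m udp --dport 5404:5405 -j ACCEPT
-A INPUT -p igmp -j ACCEPT
```

If the rules are right and it still fails, the remaining suspect is the switch (IGMP snooping without a querier is a common cause of dropped multicast).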
Hi,
tell us on which OS you want to install and run cman et al.
Show us what you've done so far. (e.g. Communication paths,
IP addresses)
Best regards
Andreas Mock
From: Gopalakrishnan N [mailto:gopalakrishnan...@gmail.com]
Sent: Monday, 16 September 2013 14:01
To: The Pa
Do I need to have a crossover cable between each node? Is it mandatory?
On Mon, Sep 16, 2013 at 8:01 PM, Gopalakrishnan N <
gopalakrishnan...@gmail.com> wrote:
> Again, when I restarted pacemaker and cman, the nodes are not
> online; back to square 1.
>
> node1 shows only node1 onl
Again, when I restarted pacemaker and cman, the nodes are not
online; back to square 1.
node1 shows only node1 online, and node2 says node2 online. I don't know
what's happening in the background...
Any advice would be appreciated..
Thanks.
On Mon, Sep 16, 2013 at 6:47 PM, Gopalak
OK, making a setup... facing small glitches with the CMAN setup... let me
check.
On Wed, Sep 11, 2013 at 5:30 PM, Florian Crouzat
wrote:
> On 09/09/2013 17:34, Gopalakrishnan N wrote:
>
> Hi,
>>
>> Any tutorial to install pacemaker with Apache Tomcat...
>>
>> Regards,
>> Gopal
>>
>>
>
> Yes, f
Hi guys,
I got it; basically it took some time to propagate and now two nodes are
showing online...
Thanks.
On Mon, Sep 16, 2013 at 6:39 PM, Gopalakrishnan N <
gopalakrishnan...@gmail.com> wrote:
> I have configured CMAN as per the link
> http://clusterlabs.org/doc/en-US/Pacemaker/1.1-plugin/h
I have configured CMAN as per the link
http://clusterlabs.org/doc/en-US/Pacemaker/1.1-plugin/html-single/Clusters_from_Scratch/index.html#_configuring_cman
but
when I type 'cman_tool nodes', only one node is online even though the
cluster.conf is propagated to the other node as well.
what could be the
On 09/12/13 15:20, Lars Marowsky-Bree wrote:
> On 2013-09-12T16:56:35, Andrew Beekhof wrote:
>
>>> The most directly equivalent solution would be to number the per-node
>>> in-flight operations similar to what migration-threshold does. (I think
>>> we can safely continue to treat all resources as
Hi Lars, hi all,
we took the time and tested drbd 8.4.4-rc in our problematic scenario.
We were able to reproduce the promote error regularly with drbd 8.4.3.
After installing 8.4.4-rc we were not able to get this error any more.
So, concerning the changes made to get around the known race condi