Re: [ClusterLabs] Antw: Re: [Slightly OT] OCFS2 over LVM

2015-08-25 Thread Digimer
On 25/08/15 04:45 AM, Ulrich Windl wrote:
 Digimer li...@alteeve.ca wrote on 24.08.2015 at 18:20 in message
 55db4453.10...@alteeve.ca:
 [...]
 Using a pair of nodes with a traditional file system exported by NFS and
 made accessible by a floating (virtual) IP address gives you redundancy
 without incurring the complexity and performance overhead of cluster
 locking. Also, you won't need clvmd either. The trade-off, though, is
 that if/when the primary fails, the nfs daemon will appear to restart to
 the users and that may require a reconnection (not sure, I use nfs
 sparingly).
 
 But that's a cheap trick: you say don't provide HA storage (a cluster FS),
 but use an existing one (NFS). How do you build an HA NFS server? You need
 another cluster. Not everybody has that many nodes available.

DRBD in single-primary mode will do the job just fine. Recovery is simply a
matter of: fence, promote to primary, mount, start NFS, take over the
virtual IP. Done.

Only two nodes are needed. This is a common setup.
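
For illustration, a minimal crm-shell sketch of that stack. Resource names,
the DRBD resource, device, mount point, and IP are placeholder assumptions,
not anything from this thread:

```
# DRBD in single-primary: promoted on one node at a time
primitive p_drbd ocf:linbit:drbd \
    params drbd_resource="r0" \
    op monitor interval="29s" role="Master" \
    op monitor interval="31s" role="Slave"
ms ms_drbd p_drbd meta master-max="1" clone-max="2" notify="true"
# Filesystem + NFS daemon + floating IP, started together on the primary
primitive p_fs ocf:heartbeat:Filesystem \
    params device="/dev/drbd0" directory="/srv/nfs" fstype="ext4"
primitive p_nfs lsb:nfsserver    # init script is "nfsserver" on SLES, "nfs" on RHEL 6
primitive p_ip ocf:heartbeat:IPaddr2 \
    params ip="192.168.122.100" cidr_netmask="24"
group g_nfs p_fs p_nfs p_ip
# Run the group only where DRBD is primary, and only after promotion
colocation col_nfs_on_master inf: g_nfs ms_drbd:Master
order ord_promote_first inf: ms_drbd:promote g_nfs:start
```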

 Generally speaking, I recommend always avoiding cluster FSes unless
 they're really required. I say this as a person who uses gfs2 in every
 cluster I build, but I do so carefully and in limited uses. In my case,
 gfs2 backs ISOs and XML definition files for VMs, things that change
 rarely, so cluster locking overhead is all but a non-issue; and I have to
 have DLM for clustered LVM anyway, so I've already incurred the
 complexity cost, so hey, why not.

 -- 
 Digimer
 Papers and Projects: https://alteeve.ca/w/ 
 What if the cure for cancer is trapped in the mind of a person without
 access to education?



-- 
Digimer
Papers and Projects: https://alteeve.ca/w/
What if the cure for cancer is trapped in the mind of a person without
access to education?


Re: [ClusterLabs] Corosync GitHub vs. dev list

2015-08-25 Thread Ken Gaillot
On 08/25/2015 05:20 AM, Ferenc Wagner wrote:
 Hi,
 
 Since Corosync is hosted on GitHub, I wonder if it's enough to submit
 pull requests/issues/patch comments there to get the developers'
 attention, or should I also post to develop...@clusterlabs.org?

GitHub is good for patches, and for when you want to reach just the corosync
developers; they'll get the usual GitHub notifications.

The list is good for discussion, and reaches a broader audience
(developers of other cluster components, and advanced users who write
code for their clusters).



[ClusterLabs] Antw: [Slightly OT] OCFS2 over LVM

2015-08-25 Thread Ulrich Windl
 Jorge Fábregas jorge.fabre...@gmail.com wrote on 23.08.2015 at 20:13 in
message 55da0d4f.1080...@gmail.com:
 Hi,
 
 I'm still doing some tests on SLES 11 SP4 & I was trying to run
 mkfs.ocfs2 against a logical volume (with all the infrastructure
 ready: cLVM & DLM & o2cb), but it gives me errors while creating it.  If
 I run it against a raw device (no LVM) it works.
 
 Then I found this from an Oracle PDF:
 
 It is important to note that creating OCFS2 volumes on logical volumes
 (LVM) is not supported. This is due to the fact that logical volumes are
 not cluster aware and corruption of the OCFS2 file system may occur.

Of course you need cLVM! With cLVM it definitely worked up to SP3.
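
For reference, a hedged sketch of preparing a cluster-aware LV before running
mkfs.ocfs2, with clvmd and DLM already up. Device names, VG/LV names, and
sizes are made-up examples:

```
lvmconf --enable-cluster          # switch lvm.conf to cluster locking (locking_type=3)
vgcreate -c y vg_ocfs2 /dev/sdb   # -c y marks the VG as clustered
lvcreate -n lv_data -L 10G vg_ocfs2
mkfs.ocfs2 -N 2 /dev/vg_ocfs2/lv_data   # -N 2: two node slots
```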

 
 Can anyone please confirm if indeed OCFS2 won't work on top of LVM as of
 today?  I found no mention of this in the HAE Guide (strange).
 
 Thanks!
 Jorge
 


[ClusterLabs] Antw: Re: Antw: Re: Antw: Re: Antw: Re: MySQL resource causes error 0_monitor_20000.

2015-08-25 Thread Ulrich Windl
 Kiwamu Okabe kiw...@debian.or.jp wrote on 20.08.2015 at 18:14 in message
caevx6dm2yafsnuubttufhcrzyyuzk863wiqicuwkdmjtfws...@mail.gmail.com:
 Hi,
 
 On Wed, Aug 19, 2015 at 5:03 PM, Kiwamu Okabe kiw...@debian.or.jp wrote:
 The resource-agents package has no ocf-tester command.
 
 I updated pacemaker to 1.1.12-1.el6,
 and ran ocf-tester, which shows the following message:
 
 ```
 # ocf-tester -n mysql_repl -o binary=/usr/local/mysql/bin/mysqld_safe
 -o datadir=/data/mysql -o pid=/data/mysql/mysql.pid -o
 socket=/tmp/mysql.sock -o log=/data/mysql/centillion.db.err -o
 replication_user=repl -o replication_passwd=slavepass
 /usr/lib/ocf/resource.d/heartbeat/mysql
 Beginning tests for /usr/lib/ocf/resource.d/heartbeat/mysql...
 * rc=107: Demoting a start resource should not fail
 * rc=107: Demote failed
 Error signing on to the CIB service: Transport endpoint is not connected
 Aborting tests
 ```
 
 What does the 107 mean?

The text after it is more important: there is either a problem in the RA, or in
the RA configuration. You could also try -v (be verbose while testing) to get
more output. As the RA really does perform the indicated operations, you may
find more details in the syslogs.

Regards,
Ulrich
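
To chase the failing demote by hand, one hedged approach is to invoke the RA
actions directly with the same parameters ocf-tester was given. The paths are
copied from the command line above; OCF_ROOT, OCF_RESOURCE_INSTANCE, and the
OCF_RESKEY_* variables are the standard RA environment convention, not
something confirmed in this thread:

```
export OCF_ROOT=/usr/lib/ocf
export OCF_RESOURCE_INSTANCE=mysql_repl
export OCF_RESKEY_binary=/usr/local/mysql/bin/mysqld_safe
export OCF_RESKEY_datadir=/data/mysql
export OCF_RESKEY_pid=/data/mysql/mysql.pid
export OCF_RESKEY_socket=/tmp/mysql.sock
export OCF_RESKEY_log=/data/mysql/centillion.db.err
export OCF_RESKEY_replication_user=repl
export OCF_RESKEY_replication_passwd=slavepass

ra=/usr/lib/ocf/resource.d/heartbeat/mysql
$ra monitor; echo "monitor rc=$?"   # expect 0 (running) or 7 (not running)
$ra demote;  echo "demote rc=$?"    # the action ocf-tester reports as failing
```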


 
 Thanks,
 -- 
 Kiwamu Okabe at METASEPI DESIGN
 


[ClusterLabs] Antw: SLES 11 SP4 csync2

2015-08-25 Thread Ulrich Windl
 Jorge Fábregas jorge.fabre...@gmail.com wrote on 22.08.2015 at 07:12 in
message 55d804a4.50...@gmail.com:
 Hi everyone,
 
 I'm trying out SLES 11 SP4 with the High-Availability Extension on two
 virtual machines.  I want to keep things simple & I have a question
 regarding the csync2 tool from SUSE.  Considering that:
 
 - I'll have just two nodes
 - I'll be using corosync without encryption (no authkey file)
 - I won't be using DRBD
 
 Do I really need the csync2 service? In order to bootstrap the cluster

IMHO it's handy. We always change the same node (if it's up) and sync from
there. The advantage shows when you add more nodes (or files to sync). You can
live without it, but be disciplined when making changes.
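
For illustration, a hedged sketch of what /etc/csync2/csync2.cfg might look
like for a two-node setup; the group name, hostnames, and file list are
assumptions, not from this thread:

```
group ha_cluster
{
    host node1 node2;
    key /etc/csync2/key_hagroup;
    include /etc/corosync/corosync.conf;
    include /etc/sysconfig/pacemaker;
    include /etc/sysconfig/corosync;
}
```

Generate the shared key once with `csync2 -k /etc/csync2/key_hagroup`, copy
the key and config to the peer, then push changes with `csync2 -xv`.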

 I'll configure corosync.conf on the first node & then I'll manually
 transfer it to the 2nd node (and modify accordingly).  That's the only
 thing I can think of that I need to take care of (file-wise).  After
 that I'll use the crm shell & the Hawk web console.
 
 I guess my question is: does the crm shell or Hawk need the csync2 tool
 to function properly? Is there anything an admin could do through them
 that might require a file to be synced afterwards?
 
 Thanks!
 
 -- 
 Jorge
 


[ClusterLabs] Antw: Antw: Re: [ClusterLabs Developers] Resource Agent language discussion

2015-08-25 Thread Ulrich Windl
 Ulrich Windl ulrich.wi...@rz.uni-regensburg.de wrote on 25.08.2015 at
08:59 in message 55dc2e6602a10001b...@gwsmtp1.uni-regensburg.de:
 Jehan-Guillaume de Rorthais j...@dalibo.com wrote on 19.08.2015 at
 10:59 in message 20150819105900.24f85553@erg:
 
 [...]
[...]
 
 After users have set up their preference, the maintainer of the software 
 could
 add a work of obsolescence to the RA that lost in the users' vote...

s/work/word/

 [...]





[ClusterLabs] Antw: Re: [Slightly OT] OCFS2 over LVM

2015-08-25 Thread Ulrich Windl
 Digimer li...@alteeve.ca wrote on 24.08.2015 at 18:20 in message
55db4453.10...@alteeve.ca:
[...]
 Using a pair of nodes with a traditional file system exported by NFS and
 made accessible by a floating (virtual) IP address gives you redundancy
 without incurring the complexity and performance overhead of cluster
 locking. Also, you won't need clvmd either. The trade-off, though, is
 that if/when the primary fails, the nfs daemon will appear to restart to
 the users and that may require a reconnection (not sure, I use nfs
 sparingly).

But that's a cheap trick: you say don't provide HA storage (a cluster FS), but
use an existing one (NFS). How do you build an HA NFS server? You need another
cluster. Not everybody has that many nodes available.

 
 Generally speaking, I recommend always avoiding cluster FSes unless
 they're really required. I say this as a person who uses gfs2 in every
 cluster I build, but I do so carefully and in limited uses. In my case,
 gfs2 backs ISOs and XML definition files for VMs, things that change
 rarely, so cluster locking overhead is all but a non-issue; and I have to
 have DLM for clustered LVM anyway, so I've already incurred the
 complexity cost, so hey, why not.
 
 -- 
 Digimer
 Papers and Projects: https://alteeve.ca/w/ 
 What if the cure for cancer is trapped in the mind of a person without
 access to education?
 


[ClusterLabs] Antw: Re: SLES 11 SP4 csync2

2015-08-25 Thread Ulrich Windl
 Jorge Fábregas jorge.fabre...@gmail.com wrote on 22.08.2015 at 19:07 in
message 55d8ac56.4020...@gmail.com:
 On 08/22/2015 01:38 AM, Andrei Borzenkov wrote:
 Wrong question :) Of course you can do everything manually. The real
 question should be: will SUSE support an installation done manually? If
 you do not care about support, then sure, you do not need it.
 
 That's a good point (SUSE support).  Ok, I played with the yast cluster
 module (for initial cluster configuration) and noticed that, apart from
 creating the corosync.conf file, it created:
 
 - /etc/sysconfig/pacemaker
 - /etc/sysconfig/corosync
 
 ...so I must remind myself that this is not just Linux with
 pacemaker/corosync & friends.  It's all that *on SUSE* so, when in
 Rome, do as the Romans do :)
 
 I'll set it up then, in order not to break the warranty.  The HAE guide
 also mentions placing a call to csync2 in ~/.bash_logout, which is
 nice (so you don't forget).

I wouldn't do that, nor recommend it. I would sync when I'm ready. If
multiple people log in and out as root, you may run into trouble...
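
A hedged sketch of that "sync when ready" workflow, run on the node where the
change was made:

```
csync2 -xv    # propagate pending changes to the peer(s), verbosely
csync2 -T     # afterwards, list anything still out of sync (dry-run check)
```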

 
 
 No, they manipulate the CIB, so this should be OK. But in real life there
 are always more files that should be kept in sync between cluster nodes,
 and having a tool to automate it is good.
 
 Got it.  Thanks Andrei!
 
 All the best,
 Jorge
 


[ClusterLabs] Antw: Re: [ClusterLabs Developers] Resource Agent language discussion

2015-08-25 Thread Ulrich Windl
 Jehan-Guillaume de Rorthais j...@dalibo.com wrote on 19.08.2015 at
10:59 in message 20150819105900.24f85553@erg:

[...]
 Because if both are included, then they will forevermore be answering the
 question “which one should I use?”.
 
 True.

I think the user base will answer this in terms of how many users get an RA to
do what they expect it to do, and I'd favor that decision over some
maintainer's decision about whether this or that is better or worse.

After users have set up their preference, the maintainer of the software could
add a work of obsolescence to the RA that lost in the users' vote...
[...]

Regards,
Ulrich




Re: [ClusterLabs] Cluster.conf

2015-08-25 Thread Christine Caulfield
On 25/08/15 14:14, Streeter, Michelle N wrote:
 I am using pcs, but it does nothing with the cluster.conf file.   Also, I am 
 currently required to use RHEL 6.6.
 
 I have not been able to find any documentation on what is required in the 
 cluster.conf file under the newer versions of pacemaker and I have not been 
 able to reduce my current version down enough to satisfy pacemaker and so 
 would you please provide an example of what is required in the cluster.conf 
 file?
 
 I don't think the CMAN component can operate without that file (location
 possibly overridden with $COROSYNC_CLUSTER_CONFIG_FILE environment
 variable).  What distro, or at least commands to bring the cluster up
 do you use?
 
 We are only allowed to download from Red Hat, and I have both the corosync 
 and pacemaker services set to on so they start at boot.   It does not matter 
 whether I stop all three services (cman, corosync, and pacemaker) and then 
 start corosync first and then pacemaker: if I have a cluster.conf file in 
 place, it fails to start.
 

We need to know more about what exactly you mean by 'failed to start'.
Actual error messages and the command you used to start the cluster
would be appreciated, along with any syslog messages.

Pacemaker on RHEL 6 requires cman. If cman is failing to start, then
that's a configuration error that we need to look into (and the
cluster.conf you posted is not enough for a valid cluster, BTW: you need
fencing in there at least!).
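
For illustration, a hedged sketch of the usual skeleton cluster.conf for
cman+pacemaker on RHEL 6, using the node names from this thread. The
fence_pcmk entries redirect cman's fencing requests to pacemaker; treat this
as a starting point, not a verified config:

```
<?xml version="1.0"?>
<cluster config_version="1" name="CNAS">
  <cman two_node="1" expected_votes="1"/>
  <clusternodes>
    <clusternode name="nas01" nodeid="1">
      <fence>
        <method name="pcmk-redirect">
          <device name="pcmk" port="nas01"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="nas02" nodeid="2">
      <fence>
        <method name="pcmk-redirect">
          <device name="pcmk" port="nas02"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <fencedevices>
    <fencedevice name="pcmk" agent="fence_pcmk"/>
  </fencedevices>
</cluster>
```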

If the cluster starts 'without cman' then I can only assume that
something is very strangely wrong on your system. What command do you
use in this scenario, and what do you class as 'started'? Again, messages
and logs would be helpful in diagnosing what's going on here,

Chrissie

 This is my current cluster.conf file which just failed.
 <?xml version="1.0"?>
 <cluster name="CNAS">
   <clusternodes>
     <clusternode name="nas01">
     </clusternode>
     <clusternode name="nas02">
     </clusternode>
   </clusternodes>
 </cluster>
 
 Michelle Streeter 
 ASC2 MCS - SDE/ACL/SDL/EDL OKC Software Engineer
 The Boeing Company
 
 Date: Mon, 24 Aug 2015 17:52:01 +
 From: Streeter, Michelle N michelle.n.stree...@boeing.com
 To: users@clusterlabs.org users@clusterlabs.org
 Subject: [ClusterLabs] Cluster.conf
 
 If I have a cluster.conf file in /etc/cluster, my cluster will not start.   
 Pacemaker 1.1.11, Corosync 1.4.7, cman 3.0.12.  But if I do not have a 
 cluster.conf file, then my cluster does start with my current configuration.   
 However, when I try to stop the cluster, it won't stop unless I have my 
 cluster.conf file in place.   How can I dump my CIB to my cluster.conf file 
 so my cluster will start with the conf file in place?
 
 Michelle Streeter
 ASC2 MCS - SDE/ACL/SDL/EDL OKC Software Engineer
 The Boeing Company
 
 Date: Mon, 24 Aug 2015 14:00:48 -0400
 From: Digimer li...@alteeve.ca
 To: Cluster Labs - All topics related to open-source clustering
   welcomed users@clusterlabs.org
 Subject: Re: [ClusterLabs] Cluster.conf
 
 The cluster.conf is needed by cman, and in RHEL 6, pacemaker needs to
 use cman as the quorum provider. So you need a skeleton cluster.conf and
 it is different from cib.xml.
 
 If you use pcs/pcsd to set up pacemaker on RHEL 6.7, it should configure
 everything for you, so you should be able to go straight to setting up
 pacemaker and not worry about cman/corosync directly.
 
 digimer
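
For context, a hedged sketch of the pcs commands that generate cluster.conf
on RHEL 6; node names are taken from the thread, and the pcsd authentication
prompts are omitted:

```
pcs cluster auth nas01 nas02                 # authenticate pcsd on both nodes
pcs cluster setup --name CNAS nas01 nas02    # writes /etc/cluster/cluster.conf
pcs cluster start --all
```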
 
 
 

