[ClusterLabs] Clusvcadm -Z substitute in Pacemaker

2016-07-08 Thread jaspal singla
Hello Everyone,



I need a little help; if anyone can give me some pointers, it would help me a lot.



In RHEL-7.x:



The cluster manager is Pacemaker, and when I use the commands below to freeze my
resource group, they actually stop all of the resources associated with the
resource group.



# pcs cluster standby <node>

# pcs cluster unstandby <node>



Result: this actually stops every resource group on that node (ctm_service is
one of the resource groups; it gets stopped, including the database, which
goes to MOUNT mode).





However, with the clusvcadm command on RHEL 6.x, ctm_service is not stopped
and my database stays in RW mode:



# clusvcadm -Z ctm_service

# clusvcadm -U ctm_service





So my concern here is: freezing/unfreezing should not affect the status of
the group. Is there any way to get on RHEL 7.x the same behavior that
clusvcadm gave on RHEL 6?
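
For reference, a rough sketch of two Pacemaker-side candidates that look
closer to a freeze than standby (assuming the same ctm_service group; whether
either really matches clusvcadm -Z is exactly my question):

# pcs resource unmanage ctm_service
# pcs resource manage ctm_service

# pcs property set maintenance-mode=true
# pcs property set maintenance-mode=false

Unmanaging leaves the resources running and only stops the cluster from
starting/stopping them; maintenance-mode=true does the same for every
resource in the cluster at once.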



Thanks

Jaspal
___
Users mailing list: Users@clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


[ClusterLabs] RES: Pacemaker and OCFS2 on stand alone mode

2016-07-08 Thread Carlos Xavier
Thank you very much to everyone who tried to help me.

> 
> "Carlos Xavier"  writes:
> 
> > 1467918891 Is dlm missing from kernel? No misc devices found.
> > 1467918891 /sys/kernel/config/dlm/cluster/comms: opendir failed: 2
> > 1467918891 /sys/kernel/config/dlm/cluster/spaces: opendir failed: 2
> > 1467918891 No /sys/kernel/config, is configfs loaded?
> > 1467918891 shutdown
> 
> Try following the above hints:
> 
> modprobe configfs
> modprobe dlm
> mount -t configfs configfs /sys/kernel/config
> 

I tried those tips; they helped me get further, but it wasn't enough to get
OCFS2 started in stand-alone mode in order to recover the data.
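
In case it helps anyone else, a quick way to double-check that the modules
and the configfs mount really took effect might be something like this
(dlm_tool comes with the dlm userland tools):

# lsmod | egrep 'dlm|configfs'
# mount | grep configfs
# dlm_tool status

(configfs can also be built into the kernel, in which case lsmod will not
list it.)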

> and then start the control daemon again.  But this is pretty much what the 
> controld resource should do
> anyway.  The main question is why your cluster does not do it by itself.  If 
> you give up after all,
> try this:
> https://www.drbd.org/en/doc/users-guide-83/s-ocfs2-legacy
> --

I decided to do a whole install of another machine, to take the place of the
burned one, just to recover the data.

Once again, many thanks to you.

Regards,
Carlos





Re: [ClusterLabs] system or ocf resource agent

2016-07-08 Thread Klaus Wenninger
On 07/08/2016 06:15 PM, Ken Gaillot wrote:
> On 07/08/2016 05:10 AM, Heiko Reimer wrote:
>> Hi,
>>
>> I am setting up a new Debian 8 HA cluster with DRBD, Corosync and
>> Pacemaker, running Apache and MySQL. In my old environment I had
>> configured resources with OCF resource agents. Now I have seen that there
>> is systemd. Which agent would you recommend?
>>
>>
>> Mit freundlichen Grüßen / Best regards
>>  
>> Heiko Reimer
>
> There's no one answer for all services. Some things to consider:
>
> * Only OCF agents can take parameters, so they may have additional
> capabilities that the systemd agent doesn't.
>
> * Only OCF agents can be used for globally unique clones, clones that
> require notifications, and multistate (master/slave) resources.
>
> * OCF agents are usually written to support a variety of
> OSes/distributions, while systemd unit files are often tailored to the
> particular system. This can give OCF an advantage if you have a variety
> of OSes and want the resource agent to behave as identically as possible
> on all of them, or it can give systemd an advantage if you have a
> homogeneous environment and want to use OS-specific facilities as much
> as possible.
>
> * A lot depends on the particular service. Is the OCF agent widely used
> and actively developed? If so, it is more likely to have better features
> and enhanced support for running in a cluster; if not, the systemd unit
> may be more up-to-date with recent changes in the underlying service.
>
> * Your Pacemaker version matters. Pacemaker added systemd resource
> support in 1.1.8, but there were significant issues until 1.1.13, and
> minor but useful fixes since then.
One of the more recent changes you might be interested in, if you are
experiencing problems with shutdown order when systemd services are involved:
https://github.com/ClusterLabs/pacemaker/commit/6aae8542abedc755b90c8c49aa5c429718fd12f1




Re: [ClusterLabs] Doing reload right

2016-07-08 Thread Ken Gaillot
On 07/04/2016 07:13 AM, Ferenc Wágner wrote:
> Ken Gaillot  writes:
> 
>> Does anyone know of an RA that uses reload correctly?
> 
> My resource agents advertise a no-op reload action for handling their
> "private" meta attributes.  Meta in the sense that they are used by the
> resource agent when performing certain operations, not by the managed
> resource itself.  Which means they are trivially changeable online,
> without any resource operation whatsoever.
> 
>> Does anyone object to the (backward-incompatible) solution proposed
>> here?
> 
> I'm all for cleanups, but please keep an online migration path around.

Not sure what you mean by online ... the behavior would change when
Pacemaker was upgraded, so the node would already be out of the cluster
at that point. You would unmanage resources if desired, stop pacemaker
on the node, upgrade pacemaker, upgrade the RA, then start/manage again.
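
Roughly, as a pcs-flavored sketch (the resource name is just a placeholder,
and the package-upgrade step depends on your distribution):

# pcs resource unmanage my_rsc        (optional)
# pcs cluster stop                    (on the node being upgraded)
  ... upgrade the pacemaker packages and install the updated RA ...
# pcs cluster start
# pcs resource manage my_rsc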

If you mean that you would like to use the same RA before and after the
upgrade, that would be doable. We could bump the crm feature set, which
gets passed to the RA as an environment variable. You could modify the
RA to handle both reload and reload-params, and if it's asked to reload,
check the feature set to decide which type of reload to do. You could
upgrade the RA anytime before the pacemaker upgrade.

In pseudo-code, the recommended way of supporting reload would become:

  reload_params() { ... }
  reload_service() { ... }

  if action is "reload-params" then
 reload_params()
  else if action is "reload"
 if crm_feature_set < X.Y.Z then
reload_params()
 else
reload_service()
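
Or, as a rough shell sketch of the same dispatch (X.Y.Z stays a placeholder,
and I am assuming the usual resource-agents helpers, ocf_version_cmp and the
crm_feature_set value that Pacemaker passes to the agent):

  #!/bin/sh
  : ${OCF_FUNCTIONS_DIR=${OCF_ROOT}/lib/heartbeat}
  . ${OCF_FUNCTIONS_DIR}/ocf-shellfuncs

  reload_params() { :; }    # re-read the reloadable instance parameters
  reload_service() { :; }   # ask the daemon itself to reload

  case "$1" in
      reload-params)
          reload_params
          ;;
      reload)
          # ocf_version_cmp returns 0 when the first version is the lower one
          if ocf_version_cmp "$OCF_RESKEY_crm_feature_set" "X.Y.Z"; then
              reload_params    # older Pacemaker: "reload" still means parameters
          else
              reload_service
          fi
          ;;
  esac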


Handling both "unique" and "reloadable" would be more complicated, but
that's inherent in the mishmash of meanings "unique" has right now. I see
three approaches:

1. Use "unique" in its GUI sense and "reloadable" to indicate reloadable
parameters. This would be cleanest, but would not be useful with
pre-"reloadable" pacemaker.

2. Use both unique=0 and reloadable=1 to indicate reloadable parameters.
This sacrifices proper GUI hinting to keep compatibility with pre- and
post-"reloadable" pacemaker (the same sacrifice that has to be made now
to use reload correctly).

3. Dynamically modify the metadata according to the crm feature set,
using approach 1 with post-"reloadable" pacemaker and approach 2 with
pre-"reloadable" pacemaker. This is the most flexible but makes the code
more complicated. In pseudocode, it might look something like:

   if crm_feature_set < X.Y.Z then
  UNIQUE_TRUE=""
  UNIQUE_FALSE=""
  RELOADABLE_TRUE="unique=0"
  RELOADABLE_FALSE="unique=1"
   else
  UNIQUE_TRUE="unique=1"
  UNIQUE_FALSE="unique=0"
  RELOADABLE_TRUE="reloadable=1"
  RELOADABLE_FALSE="reloadable=0"

   meta_data() {
  ...
  <parameter name="..." $UNIQUE_FALSE $RELOADABLE_TRUE>
  ...
  <parameter name="..." $UNIQUE_TRUE $RELOADABLE_FALSE>
   }



Re: [ClusterLabs] system or ocf resource agent

2016-07-08 Thread Ken Gaillot
On 07/08/2016 05:10 AM, Heiko Reimer wrote:
> Hi,
> 
> I am setting up a new Debian 8 HA cluster with DRBD, Corosync and
> Pacemaker, running Apache and MySQL. In my old environment I had configured
> resources with OCF resource agents. Now I have seen that there is systemd.
> Which agent would you recommend?
> 
> 
> Mit freundlichen Grüßen / Best regards
>  
> Heiko Reimer


There's no one answer for all services. Some things to consider:

* Only OCF agents can take parameters, so they may have additional
capabilities that the systemd agent doesn't.

* Only OCF agents can be used for globally unique clones, clones that
require notifications, and multistate (master/slave) resources.

* OCF agents are usually written to support a variety of
OSes/distributions, while systemd unit files are often tailored to the
particular system. This can give OCF an advantage if you have a variety
of OSes and want the resource agent to behave as identically as possible
on all of them, or it can give systemd an advantage if you have a
homogeneous environment and want to use OS-specific facilities as much
as possible.

* A lot depends on the particular service. Is the OCF agent widely used
and actively developed? If so, it is more likely to have better features
and enhanced support for running in a cluster; if not, the systemd unit
may be more up-to-date with recent changes in the underlying service.

* Your Pacemaker version matters. Pacemaker added systemd resource
support in 1.1.8, but there were significant issues until 1.1.13, and
minor but useful fixes since then.
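
To make the parameter point concrete, here is roughly how the same web server
might be configured each way with pcs (the agent names and parameter values
are only examples for a Debian-style apache2 install):

# pcs resource create website ocf:heartbeat:apache \
      configfile=/etc/apache2/apache2.conf \
      statusurl="http://127.0.0.1/server-status" \
      op monitor interval=30s

# pcs resource create website systemd:apache2 op monitor interval=30s

The OCF agent can be told where the configuration lives and how to
health-check the server; the systemd resource can only start, stop and
monitor whatever the unit file defines.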



Re: [ClusterLabs] Pacemaker and OCFS2 on stand alone mode

2016-07-08 Thread Ferenc Wágner
"Carlos Xavier"  writes:

> 1467918891 Is dlm missing from kernel? No misc devices found.
> 1467918891 /sys/kernel/config/dlm/cluster/comms: opendir failed: 2
> 1467918891 /sys/kernel/config/dlm/cluster/spaces: opendir failed: 2
> 1467918891 No /sys/kernel/config, is configfs loaded?
> 1467918891 shutdown

Try following the above hints:

modprobe configfs
modprobe dlm
mount -t configfs configfs /sys/kernel/config

and then start the control daemon again.  But this is pretty much what
the controld resource should do anyway.  The main question is why your
cluster does not do it by itself.  If you give up after all, try this:
https://www.drbd.org/en/doc/users-guide-83/s-ocfs2-legacy
-- 
Feri



[ClusterLabs] system or ocf resource agent

2016-07-08 Thread Heiko Reimer

Hi,

I am setting up a new Debian 8 HA cluster with DRBD, Corosync and Pacemaker,
running Apache and MySQL. In my old environment I had configured resources
with OCF resource agents. Now I have seen that there is systemd. Which agent
would you recommend?



Mit freundlichen Grüßen / Best regards
 
Heiko Reimer



--
Heiko Reimer
IT / Development

-
Sport-Tiedje GmbH
International Headquarters
Flensburger Str. 55
D-24837 Schleswig

Tel.: +49 - 4621- 4210-864
Fax.: +49 - 4621 - 4210 879
heiko.rei...@sport-tiedje.de
http://www.sport-tiedje.com
-

