[ClusterLabs] libqb 1.0.1 release

2016-11-24 Thread Christine Caulfield
I am very pleased to announce the 1.0.1 release of libqb

This is a bugfix release, consisting mainly of many small amendments.

Low: ipc_shm: fix superfluous NULL check
log: Don't overwrite valid tags
Low: further avoid magic in qblog.h by using named constants
Low: log: check for appropriate space when serializing a char
Low: sanitize import of  symbols
Low: sanitize import of  symbols
Low: further sanitize qbipc[cs].h public headers wrt. includes
Med: log_thread: logt_wthread_lock is vital for logging thread
Low: unix: new qb_sys_unlink_or_truncate{,_at} helpers
log: Add missing z,j, & t types to the logger
Med: rb: use new qb_rb_close_helper able to resort to file truncating
Low: log: check for appropriate space when serializing a char
API: introduce alternative, header-based versioning
API: header-based versioning: s/PATCH/MICRO
Low: explain mysterious lines in a public header (qblog.h)
tests: refactor test case defs using versatile add_tcase macro
tests: SIGSTOP cannot be caught, blocked, or ignored
defs: add wrappers over preprocessor operators
build: be more restrictive about QB_HAVE_ATTRIBUTE_SECTION
Add some Hurd support
build: use latest git-version-gen from gnulib (rev. 6118065)
build: persuade git-version-gen vMAJOR.MINOR tags just miss .0
tests: ensure verbose output on failure w/ more recent automake
tests: make clang-friendly (avoid using run-time VLAs)
CI: make travis use also clang compiler (for good measure)
low:fixed:Spelling error of failure in qbhdb.h
Fix typo: qblog.h: q{g -> b}_log_filter_ctl
docs: qbdefs.h: description must directly follow @file
maint: qb-blackbox man page should accompany the binary
Build: configure: do not check for unused "sched" functions
Maint: typo + unused functions checked in configure
tests: resources: check for proper names of leftover processes
doc: elaborate more on thread safety as it's not so pure
log: Remove check for HAVE_SCHED_GET_PRIORITY_MAX
tests: start stdlib failures injection effort with unlink{,at} + test
build: ensure check_SCRIPTS are distributed
build: ensure debug make flags are not derived when unsuitable
build: allow for git -> automatic COPR builds integration
doc: README: add a status badge+link for the COPR builds


A huge thank you to all of the people who have contributed to this release.

Chrissie

The current release tarball is here:
https://github.com/ClusterLabs/libqb/releases/download/v1.0.1/libqb-1.0.1.tar.gz

The github repository is here:
https://github.com/ClusterLabs/libqb

Please report bugs and issues in bugzilla:
https://bugzilla.redhat.com

___
Users mailing list: Users@clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [ClusterLabs] Antw: Re: Set a node attribute for multiple nodes with one command

2016-11-24 Thread Kostiantyn Ponomarenko
Attribute dampening doesn't work for me either.
To test it I use this script:

attrd_updater -N node-0 -n my-attr --update false --delay 20
sleep 3
attrd_updater -N node-0 -n my-attr --update true
sleep 7
attrd_updater -N node-1 -n my-attr --update true

All my resources have this rule in Pacemaker config:

crm configure location res1-location-rule res1 \
rule 0: my-attr eq true \
rule -inf: my-attr ne true

On a working two-node cluster I remove "my-attr" from both nodes.
Then I run my script, and all resources start on node-0.
Am I doing something wrong?
Or maybe my understanding of attribute dampening is not correct?

My Pacemaker version is 1.1.13. (heh, not the latest one, but it is what
it is ...)

Thank you,
Kostia
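
For reference, the dampening hack Ken suggests in the quoted reply below
can be sketched as a script (an untested sketch; it reuses the node names
and the 20-second delay from the test script above):

```shell
# Untested sketch: write a throwaway value to every node first, which
# starts the 20-second dampening timer, then set the desired value on
# all nodes before the timer expires.  attrd should then flush both
# final values to the CIB in a single write, letting the scheduler see
# both nodes as eligible at the same time.
for node in node-0 node-1; do
    attrd_updater -N "$node" -n my-attr --update staging --delay 20
done
for node in node-0 node-1; do
    attrd_updater -N "$node" -n my-attr --update true
done
```

Whether this behaves as intended on 1.1.13 is exactly the open question
of this thread.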

On Wed, Nov 23, 2016 at 7:27 PM, Kostiantyn Ponomarenko <
konstantin.ponomare...@gmail.com> wrote:

> Maybe I am doing something wrong, but I cannot set "status" section node
> attributes in a shadow CIB; the cluster applies them immediately.
> To try it out I do in a console:
>
> crm_shadow --create test
> crm_attribute --type nodes --node node-0 --name my-attribute --update
> 1 --lifetime=reboot
>
> And this attribute is set to the live cluster configuration immediately.
> What am I doing wrong?
>
> Thank you,
> Kostia
>
> On Tue, Nov 22, 2016 at 11:33 PM, Kostiantyn Ponomarenko <
> konstantin.ponomare...@gmail.com> wrote:
>
>> Ken,
>> Thank you for the explanation.
>> I will try this low-level way of shadow cib creation tomorrow.
>> PS: I will sleep much better with this excellent news/idea. =)
>>
>> Thank you,
>> Kostia
>>
>> On Tue, Nov 22, 2016 at 10:53 PM, Ken Gaillot 
>> wrote:
>>
>>> On 11/22/2016 04:39 AM, Kostiantyn Ponomarenko wrote:
>>> > Using "shadow cib" in crmsh looks like a good idea, but it doesn't work
>>> > with node attributes set in the "status" section of the Pacemaker config.
>>> > I wonder if it is possible to make it work that way.
>>>
>>> Forgot to mention -- the shadow CIB is probably the best way to do this.
>>> I don't know if there's a way to do it in crmsh, but you can use it with
>>> the low-level commands crm_shadow and crm_attribute --lifetime=reboot.
>>>
>>> > Ken,
>>> >>> start dampening timer
>>> > Could you please elaborate more on this. I don't get how I can set this
>>> > timer.
>>> > Do I need to set this timer for each node?
>>> >
>>> >
>>> > Thank you,
>>> > Kostia
>>> >
>>> > On Mon, Nov 21, 2016 at 9:30 AM, Ulrich Windl wrote:
>>> >
>>> > >>> Ken Gaillot <kgail...@redhat.com> wrote on 18.11.2016
>>> > >>> at 16:17 in message:
>>> > > On 11/18/2016 08:55 AM, Kostiantyn Ponomarenko wrote:
>>> > >> Hi folks,
>>> > >>
>>> > >> Is there a way to set a node attribute in the "status" section
>>> > >> for a few nodes at the same time?
>>> > >>
>>> > >> In my case there is a node attribute which allows some resources
>>> > >> to start in the cluster if it is set.
>>> > >> If I set this node attribute for, say, two nodes one after the
>>> > >> other, then these resources are not distributed equally between
>>> > >> these two nodes. That's because Pacemaker picks the first node on
>>> > >> which this attribute is set and immediately starts all allowed
>>> > >> resources on it. And this is not the behavior I would like to get.
>>> > >>
>>> > >> Thank you,
>>> > >> Kostia
>>> > >
>>> > > Not that I know of, but it would be a good feature to add to
>>> > > attrd_updater and/or crm_attribute.
>>> >
>>> > With crm (shell) you don't have transactions for node attributes,
>>> > but for the configuration. So if you add a location restriction
>>> > preventing any resources on your nodes, then enable the nodes, and
>>> > then delete the location restrictions in one transaction, you might
>>> > get what you want. It's not elegant, but it will do.
>>> >
>>> > To the crm shell maintainer: Is it difficult to add transactions
>>> > for node status changes? The problem I see is this: For
>>> > configuration you always have transactions (requiring "commit"),
>>> > but for node status you traditionally have none (effects are
>>> > immediate). So you'd need something like "start transaction" which
>>> > requires a "commit" or some kind of abort later.
>>> >
>>> > I also don't know whether a "shadow CIB" would help for the
>>> > original problem.
>>> >
>>> > Ulrich
>>> >
>>> > >
>>> > > You can probably hack it with a dampening value of a few seconds.
>>> > > If your rule checks for a particular value of the attribute, set
>>> > > all the nodes to a different value first, which will write that
>>> > > value and start the dampening timer. Then set all the attributes
>>> > > to the desired value,
>>> >   

Re: [ClusterLabs] OS Patching Process

2016-11-24 Thread Toni Tschampke
We recently did an upgrade of our cluster nodes from Wheezy to Jessie.
To reduce possible problems with DRBD we first updated the kernel to the
backport version, which is the same as the stable kernel in Jessie.
After that, the DRBD versions matched while one node was on Wheezy and
one node was already on Jessie.


While holding the cluster in maintenance mode we could safely upgrade
the second node and then work through the various configuration/syntax
changes from the older corosync/pacemaker versions to the backport
versions from Jessie.


Since the version jump was big, from corosync 1 & pacemaker 1.1.7 to
corosync 2 & pacemaker 1.1.15, there was quite some work dealing with
the upgraded configurations, but the cluster works again as expected.
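
In outline, the maintenance-window approach described above looks roughly
like this (an illustrative sketch, not the exact commands we ran; assumes
crmsh):

```shell
# 1. Tell Pacemaker to stop managing resources while we work:
crm configure property maintenance-mode=true
# 2. Patch/upgrade the first node (kernel, DRBD, corosync/pacemaker),
#    reboot it, and verify DRBD is connected and in sync again.
# 3. Repeat for the second node.
# 4. Confirm the recorded resource state still matches reality, then:
crm configure property maintenance-mode=false
```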


-Toni

--
Kind regards

Toni Tschampke | t...@halle.it
bcs kommunikationslösungen
Inh. Dipl. Ing. Carsten Burkhardt
Harz 51 | 06108 Halle (Saale) | Germany
tel +49 345 29849-0 | fax +49 345 29849-22
www.b-c-s.de | www.halle.it | www.wivewa.de



Am 22.11.2016 um 17:35 schrieb Jason A Ramsey:

Can anyone recommend a bulletproof process for OS patching a pacemaker
cluster that manages a drbd mirror (with LVM on top of the drbd and luns
defined for an iscsi target cluster if that matters)? Any time I’ve
tried to mess with the cluster, it seems like I manage to corrupt my
drbd filesystem, and now that I have actual data on the thing, that’s
kind of a scary proposition. Thanks in advance!



--

*[ jR ]*

/there is no path to greatness; greatness is the path/






[ClusterLabs] @ClusterLabs/devel COPR with new libqb (Was: libqb 1.0.1 release)

2016-11-24 Thread Jan Pokorný
On 24/11/16 10:42, Christine Caulfield wrote:
> I am very pleased to announce the 1.0.1 release of libqb

For instant tryout on Fedora/EL-based distros, there is already
a habitual COPR build.  But this time around, I'd like to introduce
some advancements in the process...

* * *

First, we now have a dedicated ClusterLabs group established in COPR,
and, so far, a single repository underneath it, "devel"; see:

https://copr.fedorainfracloud.org/coprs/g/ClusterLabs/devel/

The page hopefully states clearly what to expect; it's by no means
intended to eclipse the fine-tuned downstream packages[*].  The packages
are provided AS IS and the distros themselves carry no liability,
so please do not file bugs at downstream trackers -- any feedback
at the upstream level is still appreciated (as detailed there), though.

[*] that being said, Fedora is receiving an update soonish

* * *

Second, new packages are generated whenever new changesets are pushed
to the respective upstream repositories, so it's always at one's
discretion whether to pick a particular tagged version of the
component, or any other (usually the newest one).

So, to update strictly to version 1.0.1 of libqb from here, and
supposing you have dnf available and your distro is directly covered
by the builds, you would run as root:

  # dnf copr enable @ClusterLabs/devel
  # dnf update libqb-1.0.1-1$(rpm -E %dist)

as a mere "dnf update libqb" would currently update even higher,
up to 1.0.1-1.2.d03b7 (2 commits past the 1.0.1 version)
as of this writing.

In other words, not specifying the particular version will provide
you with the latest greatest version, which is only useful if you
want to push living on the bleeding edge to the extreme (and this
COPR setup is hence a means of "continuous delivery" to shout a first
buzzword here).  It's good to be aware of this.
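
If you want to see what is on offer before deciding whether to pin,
dnf can list every available build (an illustrative command, not part
of the original instructions):

```shell
# Show all libqb versions available from the enabled repos, including
# the @ClusterLabs/devel snapshot builds:
dnf --showduplicates list libqb
```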

* * *

[now especially for developers ML readers]

Third, the coverage of the ClusterLabs-associated packages is
going to grow.  So far, there's pacemaker in the pipeline[**].
There's also an immediate benefit for developers of these packages,
as the cross-dependencies are primarily satisfied within the same
COPR repository, which means that here, the latest development version
of pacemaker will get built against the latest version of libqb at
that moment, and thanks to pacemaker's unit tests (run as a hook in the
%check scriptlet when building the RPM package), there's also
really a notion of integration testing (finally, "continuous
integration" in a proper sense, IMHO; the other term to mention here).

That being said, if you work on a fellow project and want it to join
this club (and you are not a priori against Fedora affiliation, as that
requires obtaining an account in the Fedora Account System), please
contact me off-list and we'll work it out.

[**] https://github.com/ClusterLabs/pacemaker/pull/1182

* * *

Hope you'll find this useful.

-- 
Jan (Poki)




Re: [ClusterLabs] OS Patching Process

2016-11-24 Thread Dmitri Maziuk

On 2016-11-24 10:41, Toni Tschampke wrote:

We recently did an upgrade for our cluster nodes from Wheezy to Jessie.


IIRC it's the MIT CS joke that they have clusters whose uptime goes way 
back past the manufacturing date of any/every piece of hardware they're 
running on. They aren't linux-ha clusters but there's no reason why that 
shouldn't be doable with linux-ha.


Dima




[ClusterLabs] [cluster Labs] what is the difference between the standby and move commands to move resources between nodes

2016-11-24 Thread Omar Jaber
Hi all,

I want to ask: what is the difference between the "pcs cluster standby node"
and "pcs resource move resource1 example-node2" commands that are used to
move resources?










Re: [ClusterLabs] [cluster Labs] what is the difference between the standby and move commands to move resources between nodes

2016-11-24 Thread Kostiantyn Ponomarenko
The only thing that comes to my mind is that "standby" prevents all
resources from running on a node, whereas you can achieve the same with
"move", but it needs to be used for each resource. Also, with "move" you
specify the node you want a resource to be moved to.
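
The two approaches can be illustrated side by side (an untested sketch;
node and resource names are placeholders):

```shell
# Node-scoped: nothing may run on the node until it is taken out of
# standby; useful e.g. for maintenance of the whole node.
pcs cluster standby example-node1
pcs cluster unstandby example-node1

# Resource-scoped: adds a location constraint pinning one resource to
# the target node; other resources are unaffected.
pcs resource move resource1 example-node2
# The constraint persists until explicitly cleared:
pcs resource clear resource1
```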

On Nov 24, 2016 10:18 PM, "Omar Jaber"  wrote:

> Hi all,
>
> I want to ask: what is the difference between the "pcs cluster standby
> *node*" and "pcs resource move resource1 example-node2" commands that
> are used to move resources?
>