[ClusterLabs] Error when linking to libqb in shared library

2018-02-11 Thread Kristoffer Grönlund
Hi everyone,

(and especially the libqb developers)

I started hacking on a Python library written in C which links to
pacemaker, and thus to libqb as well, but I'm running into a strange
problem that I don't know how to solve.
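
Roughly, the module is built and linked along these lines (the file
names, module name and pkg-config calls here are simplified
placeholders rather than my exact build commands):

  # build the extension module and link it against libqb and
  # pacemaker's libcrmcommon (names are illustrative only)
  gcc -shared -fPIC -o _pacemakerclient.so clientmodule.c \
      $(pkg-config --cflags --libs python3 libqb) \
      -lcrmcommon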

When I try to import the library in Python, I see this error:

--- command ---
PYTHONPATH='/home/krig/projects/work/libpacemakerclient/build/python' 
/usr/bin/python3 
/home/krig/projects/python-pacemaker/build/../python/clienttest.py
--- stderr ---
python3: utils.c:66: common: Assertion `"implicit callsite section is 
observable, otherwise target's and/or libqb's build is at fault, preventing 
reliable logging" && work_s1 != NULL && work_s2 != NULL' failed.
---

This appears to be coming from the following libqb macro:

https://github.com/ClusterLabs/libqb/blob/master/include/qb/qblog.h#L352

There is a long comment above the macro which, if nothing else, tells
me that I'm not the first person to have issues with it, but it
doesn't really tell me what I'm doing wrong...
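
In case it helps narrow things down: is checking for the callsite
section in the built module the right approach? Something like this
(assuming the section is named __verbose, as the comment in qblog.h
suggests; the module file name is a placeholder):

  # look for libqb's callsite section and its boundary symbols
  # in the built extension module (placeholder file name)
  readelf -S _pacemakerclient.so | grep __verbose
  nm _pacemakerclient.so | grep __start___verbose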

Does anyone know what the issue is, and if so, what I could do to
resolve it?

Cheers,
Kristoffer

-- 
// Kristoffer Grönlund
// kgronl...@suse.com
___
Users mailing list: Users@clusterlabs.org
http://lists.clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [ClusterLabs] Issues with DB2 HADR Resource Agent

2018-02-11 Thread Ondrej Famera
On 02/01/2018 07:24 PM, Dileep V Nair wrote:
> Thanks Ondrej for the response. I have set the PEER_WINDOW to 1000 which
> I guess is a reasonable value. What I am noticing is it does not wait
> for the PEER_WINDOW. Before that itself the DB goes into a
> REMOTE_CATCHUP_PENDING state and Pacemaker gives an error saying a DB in
> STANDBY/REMOTE_CATCHUP_PENDING/DISCONNECTED can never be promoted.
> 
> 
> Regards,
> 
> *Dileep V Nair*

Hi Dileep,

sorry for the late response. DB2 should not get into the
'REMOTE_CATCHUP' phase, or the DB2 resource agent will indeed not
promote it. In my experience it usually gets into that state when DB2
on the standby was restarted during or after the PEER_WINDOW timeout.

When the primary DB2 fails, the standby should end up in a state
matching the one on line 770 of the DB2 resource agent, and the
promote operation is then attempted.

  770  STANDBY/*PEER/DISCONNECTED|Standby/DisconnectedPeer)

https://github.com/ClusterLabs/resource-agents/blob/master/heartbeat/db2#L770
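
To see which HADR state the standby is actually in at that point, you
can check with db2pd as the instance owner (the database name below
is just an example):

  # show the current HADR role and state on the standby
  # ('SAMPLE' is a placeholder database name)
  db2pd -db SAMPLE -hadr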

DB2 on the standby can get restarted when the 'promote' operation
times out, so if that was the case, try increasing the 'promote'
timeout.

So if you see that DB2 was restarted after the primary failed,
increase the promote timeout. If DB2 was not restarted, then the
question is why DB2 decided to change the status in this way.
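
For example with pcs, something along these lines should raise the
timeout (the resource name and the 900s value are placeholders;
adjust them to your configuration):

  # increase the promote timeout on the DB2 resource
  # ('db2_db2inst1' and 900s are placeholders)
  pcs resource update db2_db2inst1 op promote timeout=900s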

Let me know if the above helped.

-- 
Ondrej Faměra
@Red Hat