Hello,

Instead of implementing another lock within NiFi, I suggest
investigating the reason why the primary node changed.
NiFi uses a Zookeeper election to designate one node as the primary node.
If the primary node's Zookeeper session times out, Zookeeper elects
another node to take over the primary node role.

I suspect the previous primary node had some issue that caused it to miss
sending a heartbeat to Zookeeper within the configured session timeout
(4 secs by default, 2000 ms tickTime x 2). That can happen due to a
network issue, a JVM pause caused by GC, a virtual machine pause, etc.
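
If that timeout is too tight for your environment, the related settings
can be tuned. Roughly something like the following (double-check the
property names against your version; the values are only illustrative,
not a recommendation):

    # nifi.properties - how long NiFi asks Zookeeper to keep its session
    nifi.zookeeper.session.timeout=10 secs
    nifi.zookeeper.connect.timeout=10 secs

    # zoo.cfg - the negotiated session timeout is bounded by tickTime
    tickTime=2000
    # maxSessionTimeout defaults to 20 * tickTime; raise it if you ask
    # for a longer session timeout than that
    maxSessionTimeout=40000

Note that the Zookeeper server clamps the requested timeout between its
min and max session timeouts, so both sides need to agree.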

If you are using NiFi's embedded Zookeeper to coordinate your cluster,
I recommend using an external Zookeeper cluster for stability.
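For example, roughly like this (the hostnames below are placeholders for
your own Zookeeper ensemble):

    # nifi.properties - disable the embedded Zookeeper and point NiFi at
    # an external 3-node ensemble
    nifi.state.management.embedded.zookeeper.start=false
    nifi.zookeeper.connect.string=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181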

Thanks,
Koji
On Wed, Nov 28, 2018 at 1:04 AM Bronislav Jitnikov <shadi...@gmail.com> wrote:
>
> I have a small cluster with 3 NiFi instances, and I think I have found
> a problem.
> A QueryDatabaseTable processor is set to run on the Primary Node only,
> with Concurrent Tasks set to 1 and a large Run Schedule (something like
> 20 minutes), so I expect only one execution at a time. While a query
> was executing, the primary node changed and a new task started on the
> new primary node. I see two ways to resolve this problem:
> 1. Create some sort of lock on QueryDatabaseTable (maybe a custom
> processor that holds a cluster-wide lock via the StateManager).
> 2. Add a check in connectableTask.invoke() (better for me, because I
> have similar problems getting data from REST).
>
> Maybe I am missing something, so any help and ideas would be appreciated.
>
> Bronislav Zhitnikov
>
> PS: and sorry for my bad English.
