The part of the logs that might be suspicious is where it says
"Cluster Coordinator is located at 0.0.0.0:11443".

I'm not totally sure, but I don't think it should have 0.0.0.0 for the
address there.

What do you have set for nifi.cluster.node.address on each node? If it
is not set, can you try setting it to the appropriate hostname on each
node?
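
For example (the hostnames below are only placeholders), each node's
nifi.properties would carry that node's own hostname:

    # nifi.properties on node 1
    nifi.cluster.node.address=node-1.example.com

    # nifi.properties on node 2
    nifi.cluster.node.address=node-2.example.com

With that set, the coordinator should be advertised with a real hostname
rather than 0.0.0.0.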

I'd be interested to see your nifi.properties for each node if you are
able/willing to share that somehow.

On Wed, Oct 24, 2018 at 3:09 PM Viking K <cyber_v...@hotmail.com> wrote:
>
> Hi
> Don't know if this helps, but I run a 2-node cluster with 2 ZooKeepers.
> Did you do the ZooKeeper myid assignment?
> Also, have you set the flow election?
>
> I do it like this for my 2 instances,
>
> mkdir -p /nifi/state/zookeper/instance-1; echo 1 > /nifi/state/zookeper/instance-1/myid
> mkdir -p /nifi/state/zookeper/instance-2; echo 2 > /nifi/state/zookeper/instance-2/myid
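>
> For what it's worth, the number written to each myid file has to match the
> corresponding server.<n> entry in zookeeper.properties, so with the
> directories above the pairing is:
>
>     instance-1/myid contains 1  ->  server.1
>     instance-2/myid contains 2  ->  server.2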
>
> Then the cluster config looks like this. It's written in YAML since I use a
> wrapper, but it's pretty self-explanatory.
>
> test:
>     instances:
>         host1:
>             nifi.properties:
>                 nifi.cluster.node.address: 'host1.test.com'
>             zookeeper.properties:
>                 clientPort: '2181'
>                 dataDir: '/nifi/state/zookeper/instance-1'
>         host2:
>             nifi.properties:
>                 nifi.cluster.node.address: 'host2.test.com'
>             zookeeper.properties:
>                 clientPort: '2182'
>                 dataDir: '/nifi/state/zookeper/instance-2'
>
>     common_configuration:
>         nifi.properties:
>             nifi.cluster.is.node: 'true'
>             nifi.cluster.flow.election.max.candidates: '2'
>             nifi.zookeeper.connect.string: 'host1.test.com:2181,host2.test.com:2182'
>             nifi.state.management.embedded.zookeeper.start: 'true'
>
>         bootstrap.conf:
>             java.arg.2: '-Xms10g'
>             java.arg.3: '-Xmx10g'
>             run.as: 'nifi'
>         zookeeper.properties:
>             server.1: 'host1.test.com:2888:3888'
>             server.2: 'host2.test.com:2889:3889'
>
> /Viking
> ________________________________
> From: Saip, Alexander (NIH/CC/BTRIS) [C] <alexander.s...@nih.gov>
> Sent: Wednesday, October 24, 2018 5:47 PM
> To: users@nifi.apache.org
> Subject: RE: NiFi fails on cluster nodes
>
>
> I did what you suggested. There aren’t any errors in the log, although here
> is a warning:
>
> 2018-10-24 13:34:12,042 WARN [main] o.a.nifi.controller.StandardFlowService Failed to connect to cluster due to: org.apache.nifi.cluster.protocol.ProtocolException: Failed unmarshalling 'CONNECTION_RESPONSE' protocol message from <host-2>/<host-2 IP address>:11443 due to: java.net.SocketTimeoutException: Read timed out
>
> 2018-10-24 13:34:12,049 INFO [main] o.a.n.c.c.n.LeaderElectionNodeProtocolSender Determined that Cluster Coordinator is located at 0.0.0.0:11443; will use this address for sending heartbeat messages
>
> 2018-10-24 13:34:12,174 INFO [Process Cluster Protocol Request-2] o.a.n.c.c.node.NodeClusterCoordinator Received Connection Request from <host-2>:8008; responding with DataFlow that was elected
>
> 2018-10-24 13:34:12,175 INFO [Process Cluster Protocol Request-2] o.a.n.c.c.node.NodeClusterCoordinator Status of <host-2>:8008 changed from NodeConnectionStatus[nodeId=<host-2>:8008, state=CONNECTING, updateId=61] to NodeConnectionStatus[nodeId=<host-2>:8008, state=CONNECTING, updateId=63]
>
> Please let me know if you want to see other cluster related INFO type log
> messages.
>
>
>
> -----Original Message-----
> From: Bryan Bende <bbe...@gmail.com>
> Sent: Wednesday, October 24, 2018 12:08 PM
> To: users@nifi.apache.org
> Subject: Re: NiFi fails on cluster nodes
>
>
>
> Is there anything interesting (errors/warnings) in nifi-app.log on host 2
> during start up?
>
> Also, I'm not sure if this will do anything different, but you could try
> clearing the ZK state dir to make sure all the info in ZK is starting fresh...
>
> - Shutdown both nodes
> - Remove the directory nifi/state/zookeeper/version-2 on host 1 (not the
>   whole ZK dir, just version-2)
> - Start nifi 1 and wait for it to be up and running
> - Start nifi 2
>
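> For reference, a rough shell sketch of those steps (assuming the default
> bin/nifi.sh layout; adjust the paths to your install):
>
>     # on both nodes
>     ./bin/nifi.sh stop
>
>     # on host 1 only: clear the embedded ZooKeeper data (just version-2)
>     rm -rf ./state/zookeeper/version-2
>
>     # start host 1 first, wait until it is fully up, then start host 2
>     ./bin/nifi.sh start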
>
>
> On Wed, Oct 24, 2018 at 11:18 AM Saip, Alexander (NIH/CC/BTRIS) [C]
> <alexander.s...@nih.gov> wrote:
> >
> > Yes, that setting is the same on both hosts. I have attached the UI
> > screenshots taken on each of them. Please note that the hosts’ FQDNs have
> > been removed.
>
> > -----Original Message-----
> > From: Bryan Bende <bbe...@gmail.com>
> > Sent: Wednesday, October 24, 2018 9:25 AM
> > To: users@nifi.apache.org
> > Subject: Re: NiFi fails on cluster nodes
> >
> > Many services can share a single ZooKeeper by segmenting their data under a
> > specific root node.
> >
> > The root node is specified by nifi.zookeeper.root.node=/nifi so if those
> > were different on each node then it would form separate clusters.
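> >
> > For example (the root node values here are only illustrative), two
> > independent clusters could share one ZooKeeper like this:
> >
> >     # every node of cluster A
> >     nifi.zookeeper.root.node=/nifi-cluster-a
> >
> >     # every node of cluster B
> >     nifi.zookeeper.root.node=/nifi-cluster-b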
> >
> > Can you show screenshots of the cluster information from each node?
> >
> > May need to upload them somewhere and provide links here since attachments
> > don't always make it through.
>
> > On Wed, Oct 24, 2018 at 8:18 AM Saip, Alexander (NIH/CC/BTRIS) [C]
> > <alexander.s...@nih.gov> wrote:
> > >
> > > The ZooKeeper related settings in the nifi.properties files on both hosts
> > > are identical, with the exception of
> > > nifi.state.management.embedded.zookeeper.start, which is ‘true’ on host-1
> > > and ‘false’ on host-2. Moreover, if I shut down NiFi on host-1, it crashes
> > > on host-2. Here is the message in the browser window:
> > >
> > > Action cannot be performed because there is currently no Cluster
> > > Coordinator elected. The request should be tried again after a moment,
> > > after a Cluster Coordinator has been automatically elected.
> > >
> > > I even went as far as commenting out the server.1 line in the
> > > zookeeper.properties file on host-1 before restarting both NiFi instances,
> > > which didn’t change the outcome.
> > >
> > > When I look at the NiFi Cluster information in the UI on host-1, it shows
> > > the status of the node “CONNECTED, PRIMARY, COORDINATOR”, whereas on
> > > host-2 just “CONNECTED”. I don’t know if this tells you anything.
> > >
> > > BTW, what does “a different location in the same ZK” mean?
>
> > > -----Original Message-----
> > > From: Bryan Bende <bbe...@gmail.com>
> > > Sent: Tuesday, October 23, 2018 3:02 PM
> > > To: users@nifi.apache.org
> > > Subject: Re: NiFi fails on cluster nodes
> > >
> > > The only way I could see that happening is if the ZK config on the second
> > > node pointed at a different ZK, or at a different location in the same ZK.
> > >
> > > For example, if node 1 had:
> > >
> > > nifi.zookeeper.connect.string=node-1:2181
> > > nifi.zookeeper.connect.timeout=3 secs
> > > nifi.zookeeper.session.timeout=3 secs
> > > nifi.zookeeper.root.node=/nifi
> > >
> > > Then node 2 should have exactly the same thing.
> > >
> > > If node 2 specified a different connect string, or a different root node,
> > > then it wouldn't know about the other node.
>
> > > On Tue, Oct 23, 2018 at 2:48 PM Saip, Alexander (NIH/CC/BTRIS) [C]
> > > <alexander.s...@nih.gov> wrote:
> > > >
> > > > That's exactly the case.
>
> > > > -----Original Message-----
> > > > From: Bryan Bende <bbe...@gmail.com>
> > > > Sent: Tuesday, October 23, 2018 2:44 PM
> > > > To: users@nifi.apache.org
> > > > Subject: Re: NiFi fails on cluster nodes
> > > >
> > > > So you can get into each node's UI and they each show 1/1 for cluster
> > > > nodes?
> > > >
> > > > It doesn't really make sense how the second node would form its own
> > > > cluster.
>
> > > > On Tue, Oct 23, 2018 at 2:20 PM Saip, Alexander (NIH/CC/BTRIS) [C]
> > > > <alexander.s...@nih.gov> wrote:
> > > > >
> > > > > I copied over users.xml, authorizers.xml and authorizations.xml to
> > > > > host-2, removed flow.xml.gz, and started NiFi there. Unfortunately,
> > > > > for whatever reason, the nodes still don’t talk to each other, even
> > > > > though both of them are connected to ZooKeeper on host-1. I still see
> > > > > two separate clusters, one on host-1 with all the dataflows, and the
> > > > > other, on host-2, without any of them. On the latter, the logs have no
> > > > > mention of host-1 whatsoever, neither server name, nor IP address. On
> > > > > host-1, nifi-app.log contains a few lines like the following:
> > > > >
> > > > > 2018-10-23 13:44:43,628 INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] o.a.zookeeper.server.ZooKeeperServer Client attempting to establish new session at /<host-2 IP address>:50412
> > > > > 2018-10-23 13:44:43,629 INFO [SyncThread:0] o.a.zookeeper.server.ZooKeeperServer Established session 0x166a1d139590002 with negotiated timeout 4000 for client /<host-2 IP address>:50412
> > > > >
> > > > > I apologize for bugging you with all this; converting our standalone
> > > > > NiFi instances into cluster nodes turned out to be much more
> > > > > challenging than we had anticipated…
>
> > > > > -----Original Message-----
> > > > > From: Bryan Bende <bbe...@gmail.com>
> > > > > Sent: Tuesday, October 23, 2018 1:17 PM
> > > > > To: users@nifi.apache.org
> > > > > Subject: Re: NiFi fails on cluster nodes
> > > > >
> > > > > Probably easiest to copy the files over since you have other existing
> > > > > users/policies and you know the first node is working.
> > > > >
> > > > > On Tue, Oct 23, 2018 at 1:12 PM Saip, Alexander (NIH/CC/BTRIS) [C]
> > > > > <alexander.s...@nih.gov> wrote:
> > > > > >
> > > > > > Embarrassingly enough, there was a missing whitespace in the host DN
> > > > > > in the users.xml file. Thank you so much for pointing me in the
> > > > > > right direction! Now, in order to add another node, should I copy
> > > > > > users.xml and authorizations.xml from the connected node to it, or
> > > > > > remove them there instead?
>
> > > > > > -----Original Message-----
> > > > > > From: Bryan Bende <bbe...@gmail.com>
> > > > > > Sent: Tuesday, October 23, 2018 12:36 PM
> > > > > > To: users@nifi.apache.org
> > > > > > Subject: Re: NiFi fails on cluster nodes
> > > > > >
> > > > > > That means the user representing host-1 does not have permissions
> > > > > > to proxy.
> > > > > >
> > > > > > You can look in authorizations.xml on nifi-1 for a policy like:
> > > > > >
> > > > > > <policy identifier="287edf48-da72-359b-8f61-da5d4c45a270" resource="/proxy" action="W">
> > > > > >     <user identifier="c22273fa-7ed3-38a9-8994-3ed5fea5d234"/>
> > > > > > </policy>
> > > > > >
> > > > > > That user identifier should point to a user in users.xml like:
> > > > > >
> > > > > > <user identifier="c22273fa-7ed3-38a9-8994-3ed5fea5d234" identity="CN=<host-1, redacted>, OU=Devices, OU=NIH, OU=HHS, O=U.S. Government, C=US"/>
> > > > > >
> > > > > > All of the user identities are case sensitive and white space
> > > > > > sensitive so make sure whatever is in users.xml is exactly what is
> > > > > > shown in the logs.
> > > > > >
> > > > > > On Tue, Oct 23, 2018 at 12:28 PM Saip, Alexander (NIH/CC/BTRIS) [C]
> > > > > > <alexander.s...@nih.gov> wrote:
> > > > > > >
> > > > > > > Hi Bryan,
> > > > > > >
> > > > > > > Yes, converting two standalone NiFi instances into a cluster is
> > > > > > > exactly what we are trying to do. Here are the steps I went
> > > > > > > through in this round:
> > > > > > >
> > > > > > > · restored the original configuration files (nifi.properties,
> > > > > > >   users.xml, authorizers.xml and authorizations.xml)
> > > > > > > · restarted one instance in the standalone mode
> > > > > > > · added two new node users in the NiFi web UI (CN=<host-1,
> > > > > > >   redacted>, OU=Devices, OU=NIH, OU=HHS, O=U.S. Government, C=US
> > > > > > >   and CN=<host-2, redacted>, OU=Devices, OU=NIH, OU=HHS, O=U.S.
> > > > > > >   Government, C=US)
> > > > > > > · granted them the “proxy user requests” privileges
> > > > > > > · edited the nifi.properties file
> > > > > > >   (nifi.state.management.embedded.zookeeper.start=true,
> > > > > > >   nifi.cluster.is.node=true,
> > > > > > >   nifi.zookeeper.connect.string=<host-1, redacted>:2181)
> > > > > > > · restarted the node on host-1
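> > > > > > >
> > > > > > > For reference, after that edit the cluster-related lines in
> > > > > > > nifi.properties on host-1 read roughly as follows (hostname
> > > > > > > redacted):
> > > > > > >
> > > > > > >     nifi.state.management.embedded.zookeeper.start=true
> > > > > > >     nifi.cluster.is.node=true
> > > > > > >     nifi.zookeeper.connect.string=<host-1, redacted>:2181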
> > > > > > >
> > > > > > > On logging in, I see the cluster section of the dashboard showing
> > > > > > > 1/1 as expected, although I’m unable to do anything there due to
> > > > > > > errors like this:
> > > > > > >
> > > > > > > Insufficient Permissions
> > > > > > > Node <host-1, redacted>:8008 is unable to fulfill this request
> > > > > > > due to: Untrusted proxy CN=<host-1, redacted>, OU=Devices, OU=NIH,
> > > > > > > OU=HHS, O=U.S. Government, C=US Contact the system administrator.
> > > > > > >
> > > > > > > The nifi-user.log also contains
> > > > > > >
> > > > > > > 2018-10-23 12:17:01,916 WARN [NiFi Web Server-224] o.a.n.w.s.NiFiAuthenticationFilter Rejecting access to web api: Untrusted proxy CN=<host-1, redacted>, OU=Devices, OU=NIH, OU=HHS, O=U.S. Government, C=US
> > > > > > >
> > > > > > > From your experience, what are the most likely causes of this
> > > > > > > exception?
> > > > > > >
> > > > > > > Thank you,
> > > > > > >
> > > > > > > Alexander
> > > > > > >
> > > > > > > -----Original Message-----
> > > > > > > From: Bryan Bende <bbe...@gmail.com>
> > > > > > > Sent: Monday, October 22, 2018 1:25 PM
> > > > > > > To: users@nifi.apache.org
> > > > > > > Subject: Re: NiFi fails on cluster nodes
> > > > > > >
> > > > > > > Yes, to further clarify what I meant...
