I’ve just tried to replicate this but without success, using an RPG with the RAW
transfer protocol.

If you want, I can try removing the additional lines in my nifi.properties (the
ones for http.port and host that I added, which appeared to get this working)?

Regards
Conrad


On 21/10/2016, 15:24, "Joe Witt" <[email protected]> wrote:

    Whoa there.... can you turn on debug logging in conf/logback.xml by adding
    
       <logger name="org.apache.nifi.remote.SocketRemoteSiteListener" level="DEBUG"/>
    
    Can you submit a JIRA for the log output with the stacktrace for those
    NPEs please.
    
    Thanks
    Joe
    
    On Fri, Oct 21, 2016 at 10:21 AM, Conrad Crampton
    <[email protected]> wrote:
    > Hi,
    >
    > I realise this may be getting boring for most but…
    >
    > I didn’t find any resolution to the upgrade so I have cleanly installed
    > v1.0.0 and oddly experienced the same issue with RPGs in that although the
    > RPGs didn’t show any errors (in so much as they had no warning on them and
    > reported that S2S was secure), the errors reported were about not being able
    > to determine other nodes in the cluster.
    >
    > This is a section of log that showed an error that I don’t think I saw
    > before, but I am including it here in case one of you fine folks need it for
    > any reason…
    >
    >
    >
    > ERROR [Site-to-Site Worker Thread-194]
    > o.a.nifi.remote.SocketRemoteSiteListener Unable to communicate with remote
    > instance Peer[url=nifi://nifinode3-cm1.mis.local:57289]
    > (SocketFlowFileServerProtocol[CommsID=04674c10-7351-443f-99c8-bffc59d650a7])
    > due to java.lang.NullPointerException; closing connection
    >
    > 2016-10-21 12:19:11,401 ERROR [Site-to-Site Worker Thread-195]
    > o.a.nifi.remote.SocketRemoteSiteListener Unable to communicate with remote
    > instance Peer[url=nifi://nifinode1-cm1.mis.local:35831]
    > (SocketFlowFileServerProtocol[CommsID=97eb2f1c-f3dd-4924-89c9-d294bb4037f5])
    > due to java.lang.NullPointerException; closing connection
    >
    > 2016-10-21 12:19:11,455 ERROR [Site-to-Site Worker Thread-196]
    > o.a.nifi.remote.SocketRemoteSiteListener Unable to communicate with remote
    > instance Peer[url=nifi://nifinode5-cm1.mis.local:59093]
    > (SocketFlowFileServerProtocol[CommsID=61e129ca-2e21-477a-8201-16b905e5beb6])
    > due to java.lang.NullPointerException; closing connection
    >
    > 2016-10-21 12:19:11,462 ERROR [Site-to-Site Worker Thread-197]
    > o.a.nifi.remote.SocketRemoteSiteListener Unable to communicate with remote
    > instance Peer[url=nifi://nifinode6-cm1.mis.local:37275]
    > (SocketFlowFileServerProtocol[CommsID=48ec62f4-a9ba-45a7-a149-4892d0193819])
    > due to java.lang.NullPointerException; closing connection
    >
    > 2016-10-21 12:19:11,470 ERROR [Site-to-Site Worker Thread-198]
    > o.a.nifi.remote.SocketRemoteSiteListener Unable to communicate with remote
    > instance Peer[url=nifi://nifinode4-cm1.mis.local:57745]
    > (SocketFlowFileServerProtocol[CommsID=266459a8-7a9b-44bd-8047-b5ba4d9b67ec])
    > due to java.lang.NullPointerException; closing connection
    >
    >
    >
    > In my naivety, this suggests a problem with the socket (RAW) connection
    > protocol. The relevant section for S2S configuration in my nifi.properties is
    >
    > nifi.remote.input.socket.host=nifinode6-cm1.mis.local // the hostname is
    > different for each node, obviously
    >
    > nifi.remote.input.socket.port=10443
    >
    > nifi.remote.input.secure=true
    >
    > nifi.remote.input.http.enabled=false
    >
    > nifi.remote.input.http.transaction.ttl=30 sec
    >
    >
    >
    > The errors suggest that the ports specified here aren’t being used, but some
    > random ports are being used instead. Of course this is complete supposition
    > and probably a red herring.
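One way to test that supposition is to compare the configured S2S settings against what is actually listening on each node. A minimal sketch, assuming shell access to the node; the property values below are a stand-in copied from the mail, and the `ss` check at the end is illustrative:

```shell
# Stand-in nifi.properties with the S2S settings quoted above.
cat > /tmp/nifi.properties <<'EOF'
nifi.remote.input.socket.host=nifinode6-cm1.mis.local
nifi.remote.input.socket.port=10443
nifi.remote.input.secure=true
nifi.remote.input.http.enabled=false
EOF

# Pull out the configured S2S port...
port=$(grep '^nifi.remote.input.socket.port=' /tmp/nifi.properties | cut -d= -f2)
echo "configured S2S port: $port"

# ...then, on the node itself, compare against what is really listening
# (illustrative; run on the node):
#   ss -tlnp | grep ":$port"
```

If the listener turns out to be on a different port than the one configured, that would confirm the mismatch suspected above.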
    >
    >
    >
    > Anyway, I updated my nifi.properties to
    >
    > nifi.remote.input.socket.host=nifinode6-cm1.mis.local
    >
    > nifi.remote.input.http.host=nifinode6-cm1.mis.local
    >
    > nifi.remote.input.socket.port=10443
    >
    > nifi.remote.input.http.port=11443
    >
    > nifi.remote.input.secure=true
    >
    > nifi.remote.input.http.enabled=true
    >
    > nifi.remote.input.http.transaction.ttl=30 sec
    >
    >
    >
    > and used HTTP for my RPG, and it is now working OK.
    >
    >
    >
    > The test harness for this is
    >
    >
    >
    > GenerateFlowFile->RPG(test input port)
    >
    > InputPort(test input)->LogAttribute
    >
    >
    >
    > Regards,
    >
    > Conrad
    >
    >
    >
    >
    >
    >
    >
    > From: Conrad Crampton <[email protected]>
    > Date: Friday, 21 October 2016 at 08:18
    >
    >
    > To: "[email protected]" <[email protected]>
    > Subject: Re: Upgrade 0.6.1 to 1.0.0 problems with Remote Process Groups
    >
    >
    >
    > Hi,
    >
    > Yes, the access policy for the input port at the root level of my workspace
    > has S2S access policy (receive data via site to site) for all server nodes
    > (I have all my nodes in one group).
    >
    > For the next level down in my workspace (I have other process groups from my
    > main ‘root’ page in the UI space to organise the flows), I have input ports
    > on the next level down of flows; when I check the access policies on those
    > for S2S, the receive and send data via site-to-site options are greyed
    > out, and if I try to override the policy, they still are. I don’t know if
    > this is important, but from reading the post below, the issue that the
    > access policies looks to address is different from my issue. The RPG has a
    > locked padlock and says site to site is secure. I don’t have any warning
    > triangles on the RPG itself, but I have the aforementioned warnings in my
    > logs.
    >
    >
    >
    > I’m going to abandon this now as I can’t get it to work.
    >
    >
    >
    > I’m going to try a fresh install and go with that – and have to recreate all
    > my flows where necessary. I’m moving to a new model of data ingestion anyway,
    > so it isn’t as catastrophic as it might be.
    >
    >
    >
    > Thanks for the help.
    >
    > Conrad
    >
    >
    >
    > From: Andy LoPresto <[email protected]>
    > Reply-To: "[email protected]" <[email protected]>
    > Date: Thursday, 20 October 2016 at 17:31
    > To: "[email protected]" <[email protected]>
    > Subject: Re: Upgrade 0.6.1 to 1.0.0 problems with Remote Process Groups
    >
    >
    >
    >
    > Conrad,
    >
    >
    >
    > For the site-to-site did you follow the instructions here [1]? Each node
    > needs to be added as a user in order to make the connections.
    >
    >
    >
    > [1]
    > http://bryanbende.com/development/2016/08/30/apache-nifi-1.0.0-secure-site-to-site
    >
    >
    >
    > Andy LoPresto
    >
    > [email protected]
    >
    > [email protected]
    >
    > PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69
    >
    >
    >
    > On Oct 20, 2016, at 7:36 AM, Conrad Crampton <[email protected]>
    > wrote:
    >
    >
    >
    > Ok,
    >
    > So I tried everything suggested so far to no avail unfortunately.
    >
    >
    >
    > So what I have done is to create all new certs etc. using the toolkit, and
    > updated my existing authorized-users.xml to match the full cert
    > distinguished names (CN=server.name, OU=NIFI, etc.).
    >
    >
    >
    > Recreated all my remote process groups to not reference the original NCM, as
    > that still wouldn’t work – after a complete new install (upgrade).
    >
    >
    >
    > So now what I have is a six-node cluster using the original data/worker nodes,
    > and they are part of the cluster – all appears to be working, i.e. I can log
    > into the UI (nice, by the way ;-) on each server. There are no SSL handshake
    > errors etc., BUT the RPGs (newly created) still don’t appear to be working. I
    > am getting
    >
    >
    >
    > 11:34:24 GMT WARNING e19ccf8e-0157-1000-0000-000063bfd9c0
    > nifi6-cm1.local:9443 org.apache.nifi.remote.client.PeerSelector@782af623
    > Unable to refresh Remote Group's peers due to Unable to communicate with
    > remote NiFi cluster in order to determine which nodes exist in the remote
    > cluster
    >
    > 11:34:25 GMT WARNING e19ccf8e-0157-1000-0000-000063bfd9c0
    > nifi1-cm1.local:9443 org.apache.nifi.remote.client.PeerSelector@3a547274
    > Unable to refresh Remote Group's peers due to Unable to communicate with
    > remote NiFi cluster in order to determine which nodes exist in the remote
    > cluster
    >
    > 11:34:25 GMT WARNING e1990203-0157-1000-ffff-ffff9ff40dc0
    > nifi2-cm1.local:9443 org.apache.nifi.remote.client.PeerSelector@54c2df1
    > Unable to refresh Remote Group's peers due to Unable to communicate with
    > remote NiFi cluster in order to determine which nodes exist in the remote
    > cluster
    >
    > 11:34:25 GMT WARNING e1990203-0157-1000-ffff-ffff9ff40dc0
    > nifi5-cm1.local:9443 org.apache.nifi.remote.client.PeerSelector@50d59f3c
    > Unable to refresh Remote Group's peers due to Unable to communicate with
    > remote NiFi cluster in order to determine which nodes exist in the remote
    > cluster
    >
    > 11:34:26 GMT WARNING e19ccf8e-0157-1000-0000-000063bfd9c0
    > nifi2-cm1.local:9443 org.apache.nifi.remote.client.PeerSelector@97c92ef
    > Unable to refresh Remote Group's peers due to Unable to communicate with
    > remote NiFi cluster in order to determine which nodes exist in the remote
    > cluster
    >
    > 11:34:26 GMT WARNING e1990203-0157-1000-ffff-ffff9ff40dc0
    > nifi6-cm1.local:9443 org.apache.nifi.remote.client.PeerSelector@70663037
    > Unable to refresh Remote Group's peers due to Unable to communicate with
    > remote NiFi cluster in order to determine which nodes exist in the remote
    > cluster
    >
    > 11:34:27 GMT WARNING e1990203-0157-1000-ffff-ffff9ff40dc0
    > nifi4-cm1.local:9443 org.apache.nifi.remote.client.PeerSelector@3c040426
    > Unable to refresh Remote Group's peers due to Unable to communicate with
    > remote NiFi cluster in order to determine which nodes exist in the remote
    > cluster
    >
    >
    >
    > I can telnet from server to server on both https (UI) port and S2S port.
    >
    > I am really at a loss as to what to do now.
    >
    >
    >
    > Data is queuing up in my input processors with nowhere to go.
    >
    > Do I have to do something radical here to get this working, like stopping
    > everything, clearing out all the queues, then starting up again??? I really
    > don’t want to do this obviously, but I am getting nowhere on this – two days
    > of frustration with nothing to show for it.
    >
    >
    >
    > Any more suggestions please??
    >
    > Thanks for your patience.
    >
    > Conrad
    >
    >
    >
    >
    >
    > From: Andy LoPresto <[email protected]>
    > Reply-To: "[email protected]" <[email protected]>
    > Date: Wednesday, 19 October 2016 at 18:24
    > To: "[email protected]" <[email protected]>
    > Subject: Re: Upgrade 0.6.1 to 1.0.0 problems with Remote Process Groups
    >
    >
    >
    >
    > Hi Conrad,
    >
    >
    >
    > Bryan is correct that changing the certificates (and the encapsulating
    > keystores and truststores) will not affect any data held in the nodes.
    >
    >
    >
    > Regenerating everything using the TLS toolkit should hopefully not be too
    > challenging, but I am also curious as to why you are getting these handshake
    > exceptions now. As Bryan pointed out, adding the following line to
    > bootstrap.conf will provide substantial additional log output, which should
    > help trace the issue.
    >
    >
    >
    > java.arg.15=-Djavax.net.debug=ssl,handshake
    >
    >
    >
    > You can also imitate the node connecting to the (previous) NCM via this
    > command:
    >
    >
    >
    > $ openssl s_client -connect <host:port> -debug -state -cert
    > <path_to_your_cert.pem> -key <path_to_your_key.pem> -CAfile
    > <path_to_your_CA_cert.pem>
    >
    >
    >
    > Where:
    >
    >
    >
    > <host:port> = the hostname and port of the “NCM”
    > <path_to_your_cert.pem> = the public key used to identify the “node” (can be
    > exported from the node keystore [1])
    > <path_to_your_key.pem> = the private key used to identify the “node” (can be
    > exported from the node keystore via a two-step process)
    > <path_to_your_CA_cert.pem> = the public key used to sign the “NCM”
    > certificate (could be a 3rd party like Verisign or DigiCert, or an internal
    > organization CA if you have one)
    >
    >
    >
    > If you’ve already regenerated everything and it works, that’s fine. But if
    > you have the time to try and investigate the old certs, we are interested
    > and prepared to help. Thanks.
    >
    >
    >
    > [1] https://security.stackexchange.com/a/66865/16485
    >
    >
    >
    > Andy LoPresto
    >
    > [email protected]
    >
    > [email protected]
    >
    > PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69
    >
    >
    >
    > On Oct 19, 2016, at 1:03 PM, Bryan Bende <[email protected]> wrote:
    >
    >
    >
    > That is definitely weird that it is only an issue on the node that used to
    > be the NCM.
    >
    > Might be worth double-checking the keystore and truststore of that one node
    > to make sure they have what you would expect, and also double-checking
    > nifi.properties compared to the others to see if anything seems different.
    >
    >
    >
    > Changing all of the keystores, truststores, etc should be fine from a data
    > perspective...
    >
    >
    >
    > If you decide to go that route it would probably be easiest to start back
    > over from a security perspective, meaning:
    >
    > - Stop all the nodes and delete the users.xml and authorizations.xml from
    > all nodes
    >
    > - Configure authorizers.xml with the appropriate initial admin (or legacy
    > file) and node identities based on the new certs
    >
    > - Ensure authorizers.xml is the same on all nodes
    >
    > - Then restart everything
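For reference, the node-identity configuration mentioned in these steps lives in the file-based authorizer entry of authorizers.xml. A minimal sketch; the admin and node DNs below are illustrative, not taken from this thread:

```xml
<authorizers>
    <authorizer>
        <identifier>file-provider</identifier>
        <class>org.apache.nifi.authorization.FileAuthorizer</class>
        <property name="Authorizations File">./conf/authorizations.xml</property>
        <property name="Users File">./conf/users.xml</property>
        <property name="Initial Admin Identity">CN=admin, OU=NIFI</property>
        <property name="Legacy Authorized Users File"></property>
        <property name="Node Identity 1">CN=nifinode1-cm1.mis.local, OU=NIFI</property>
        <property name="Node Identity 2">CN=nifinode2-cm1.mis.local, OU=NIFI</property>
        <!-- ...one Node Identity property per node; keep this file identical on every node... -->
    </authorizer>
</authorizers>
```

The DNs must match the certificate subject DNs exactly, including spacing, or the nodes will not be granted proxy rights.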
    >
    >
    >
    > Alternatively, you might be able to manually add users for all of the new
    > certs before shutting everything down and give them the appropriate
    > policies, then restart everything, but that requires more manual work to get
    > everything lined up.
    >
    >
    >
    >
    >
    > On Wed, Oct 19, 2016 at 11:52 AM, Conrad Crampton
    > <[email protected]> wrote:
    >
    > Hi,
    >
    > As a plan for tomorrow – I have generated new keystores, truststores, client
    > certs etc. for all nodes in my cluster using the
    >
    >
    >
    > From: Bryan Bende <[email protected]>
    > Reply-To: "[email protected]" <[email protected]>
    > Date: Wednesday, 19 October 2016 at 15:33
    >
    >
    > To: "[email protected]" <[email protected]>
    > Subject: Re: Upgrade 0.6.1 to 1.0.0 problems with Remote Process Groups
    >
    >
    >
    >
    >
    > Trying to think of things to check here...
    >
    >
    >
    > Does every node have nifi.remote.input.secure=true in nifi.properties and
    > the URL in the RPG is an https URL?
    >
    >
    >
    > On Wed, Oct 19, 2016 at 10:25 AM, Conrad Crampton
    > <[email protected]> wrote:
    >
    > One other thing…
    >
    > The RPGs have an unlocked padlock on them saying S2S is not secure.
    >
    > Conrad
    >
    >
    >
    > From: Bryan Bende <[email protected]>
    > Reply-To: "[email protected]" <[email protected]>
    > Date: Wednesday, 19 October 2016 at 15:20
    > To: "[email protected]" <[email protected]>
    > Subject: Re: Upgrade 0.6.1 to 1.0.0 problems with Remote Process Groups
    >
    >
    >
    > Ok that does seem like a TLS/SSL issue...
    >
    >
    >
    > Is this a single cluster doing site-to-site to itself?
    >
    >
    >
    > On Wed, Oct 19, 2016 at 10:06 AM, Joe Witt <[email protected]> wrote:
    >
    thanks conrad - did get it.  Bryan is being more helpful than I, so I
    went silent :-)
    >
    > On Wed, Oct 19, 2016 at 10:02 AM, Conrad Crampton
    >
    > <[email protected]> wrote:
    >> Hi Joe,
    >>     Yep,
    >> Tried removing the RPG that referenced the NCM and adding a new one with
    >> one of the nifis as the url.
    >> That sort of worked, but I kept getting errors about the NCM not being
    >> available for the ports, and therefore couldn’t actually enable the port I
    >> needed for that RPG.
    >>     Thanks
    >>     Conrad
    >>
    >> (sending again as I don’t know if the stupid header ‘spoofed’ is stopping
    >> it getting through – apologies if already sent)
    >>
    >>     On 19/10/2016, 14:12, "Joe Witt" <[email protected]> wrote:
    >>
    >>         Conrad,
    >>
    >>         For s2s now you can just point at any of the nodes in the cluster.
    >>         Have you tried changing the URL or removing and adding new RPG
    >>         entries?
    >>
    >>         Thanks
    >>         Joe
    >>
    >>         On Wed, Oct 19, 2016 at 8:38 AM, Conrad Crampton
    >>         <[email protected]> wrote:
    >>         > Hi,
    >>         >
    >>         > I have finally taken the plunge to upgrade my cluster from 0.6.1
    >>         > to 1.0.0.
    >>         >
    >>         > 6 nodes with a NCM.
    >>         >
    >>         > With the removal of the NCM in 1.0.0, I believe I now have an
    >>         > issue where none of my Remote Process Groups work as they
    >>         > previously did, because they were configured to connect to the
    >>         > NCM (as the RPG url), which now doesn’t exist.
    >>         >
    >>         > I have tried converting my NCM to a node, but whilst I can get it
    >>         > running (sort of), when I try and connect to the cluster I get
    >>         > something like this in my logs…
    >>         >
    >>         >
    >>         >
    >>         > 2016-10-19 13:14:44,109 ERROR [main] o.a.nifi.controller.StandardFlowService
    >>         > Failed to load flow from cluster due to:
    >>         > org.apache.nifi.controller.UninheritableFlowException: Failed to
    >>         > connect node to cluster because local flow is different than
    >>         > cluster flow.
    >>         >
    >>         > org.apache.nifi.controller.UninheritableFlowException: Failed to
    >>         > connect node to cluster because local flow is different than
    >>         > cluster flow.
    >>         >   at org.apache.nifi.controller.StandardFlowService.loadFromConnectionResponse(StandardFlowService.java:879) ~[nifi-framework-core-1.0.0.jar:1.0.0]
    >>         >   at org.apache.nifi.controller.StandardFlowService.load(StandardFlowService.java:493) ~[nifi-framework-core-1.0.0.jar:1.0.0]
    >>         >   at org.apache.nifi.web.server.JettyServer.start(JettyServer.java:746) [nifi-jetty-1.0.0.jar:1.0.0]
    >>         >   at org.apache.nifi.NiFi.<init>(NiFi.java:152) [nifi-runtime-1.0.0.jar:1.0.0]
    >>         >   at org.apache.nifi.NiFi.main(NiFi.java:243) [nifi-runtime-1.0.0.jar:1.0.0]
    >>         > Caused by: org.apache.nifi.controller.UninheritableFlowException:
    >>         > Proposed Authorizer is not inheritable by the flow controller
    >>         > because of Authorizer differences: Proposed Authorizations do not
    >>         > match current Authorizations
    >>         >   at org.apache.nifi.controller.StandardFlowSynchronizer.sync(StandardFlowSynchronizer.java:252) ~[nifi-framework-core-1.0.0.jar:1.0.0]
    >>         >   at org.apache.nifi.controller.FlowController.synchronize(FlowController.java:1435) ~[nifi-framework-core-1.0.0.jar:1.0.0]
    >>         >   at org.apache.nifi.persistence.StandardXMLFlowConfigurationDAO.load(StandardXMLFlowConfigurationDAO.java:83) ~[nifi-framework-core-1.0.0.jar:1.0.0]
    >>         >   at org.apache.nifi.controller.StandardFlowService.loadFromBytes(StandardFlowService.java:671) ~[nifi-framework-core-1.0.0.jar:1.0.0]
    >>         >   at org.apache.nifi.controller.StandardFlowService.loadFromConnectionResponse(StandardFlowService.java:857) ~[nifi-framework-core-1.0.0.jar:1.0.0]
    >>         >   ... 4 common frames omitted
    >>         >
    >>         > 2016-10-19 13:14:44,414 ERROR [main] o.a.n.c.c.node.NodeClusterCoordinator
    >>         > Event Reported for ncm-cm1.local:9090 -- Node disconnected from
    >>         > cluster due to org.apache.nifi.controller.UninheritableFlowException:
    >>         > Failed to connect node to cluster because local flow is different
    >>         > than cluster flow.
    >>         >
    >>         > 2016-10-19 13:14:44,420 ERROR [Shutdown Cluster Coordinator]
    >>         > org.apache.nifi.NiFi An Unknown Error Occurred in Thread
    >>         > Thread[Shutdown Cluster Coordinator,5,main]: java.lang.NullPointerException
    >>         >
    >>         > 2016-10-19 13:14:44,423 ERROR [Shutdown Cluster Coordinator]
    >>         > org.apache.nifi.NiFi
    >>         > java.lang.NullPointerException: null
    >>         >   at java.util.concurrent.ConcurrentHashMap.putVal(ConcurrentHashMap.java:1011) ~[na:1.8.0_51]
    >>         >   at java.util.concurrent.ConcurrentHashMap.put(ConcurrentHashMap.java:1006) ~[na:1.8.0_51]
    >>         >   at org.apache.nifi.cluster.coordination.node.NodeClusterCoordinator.updateNodeStatus(NodeClusterCoordinator.java:570) ~[nifi-framework-cluster-1.0.0.jar:1.0.0]
    >>         >   at org.apache.nifi.cluster.coordination.node.NodeClusterCoordinator.shutdown(NodeClusterCoordinator.java:119) ~[nifi-framework-cluster-1.0.0.jar:1.0.0]
    >>         >   at org.apache.nifi.controller.StandardFlowService$1.run(StandardFlowService.java:330) ~[nifi-framework-core-1.0.0.jar:1.0.0]
    >>         >   at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_51]
    >>         >
    >>         > 2016-10-19 13:14:44,448 WARN [main] o.a.n.c.l.e.CuratorLeaderElectionManager
    >>         > Failed to close Leader Selector for Cluster Coordinator
    >>         > java.lang.IllegalStateException: Already closed or has not been started
    >>         >   at com.google.common.base.Preconditions.checkState(Preconditions.java:173) ~[guava-18.0.jar:na]
    >>         >   at org.apache.curator.framework.recipes.leader.LeaderSelector.close(LeaderSelector.java:270) ~[curator-recipes-2.11.0.jar:na]
    >>         >   at org.apache.nifi.controller.leader.election.CuratorLeaderElectionManager.stop(CuratorLeaderElectionManager.java:159) ~[nifi-framework-core-1.0.0.jar:1.0.0]
    >>         >   at org.apache.nifi.controller.FlowController.shutdown(FlowController.java:1303) [nifi-framework-core-1.0.0.jar:1.0.0]
    >>         >   at org.apache.nifi.controller.StandardFlowService.stop(StandardFlowService.java:339) [nifi-framework-core-1.0.0.jar:1.0.0]
    >>         >   at org.apache.nifi.web.server.JettyServer.start(JettyServer.java:753) [nifi-jetty-1.0.0.jar:1.0.0]
    >>         >   at org.apache.nifi.NiFi.<init>(NiFi.java:152) [nifi-runtime-1.0.0.jar:1.0.0]
    >>         >   at org.apache.nifi.NiFi.main(NiFi.java:243) [nifi-runtime-1.0.0.jar:1.0.0]
    >>         >
    >>         > 2016-10-19 13:14:45,062 WARN [Cluster Socket Listener]
    >>         > org.apache.nifi.io.socket.SocketListener Failed to communicate
    >>         > with Unknown Host due to java.net.SocketException: Socket closed
    >>         > java.net.SocketException: Socket closed
    >>         >   at java.net.PlainSocketImpl.socketAccept(Native Method) ~[na:1.8.0_51]
    >>         >   at java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:404) ~[na:1.8.0_51]
    >>         >   at java.net.ServerSocket.implAccept(ServerSocket.java:545) ~[na:1.8.0_51]
    >>         >   at sun.security.ssl.SSLServerSocketImpl.accept(SSLServerSocketImpl.java:348) ~[na:1.8.0_51]
    >>         >   at org.apache.nifi.io.socket.SocketListener$2.run(SocketListener.java:112) ~[nifi-socket-utils-1.0.0.jar:1.0.0]
    >>         >   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_51]
    >>         >
    >>         > 2016-10-19 13:14:45,064 WARN [main] org.apache.nifi.web.server.JettyServer
    >>         > Failed to start web server... shutting down.
    >>         > java.lang.Exception: Unable to load flow due to: java.io.IOException:
    >>         > org.apache.nifi.controller.UninheritableFlowException: Failed to
    >>         > connect node to cluster because local flow is different than
    >>         > cluster flow.
    >>         >   at org.apache.nifi.web.server.JettyServer.start(JettyServer.java:755) ~[nifi-jetty-1.0.0.jar:1.0.0]
    >>         >   at org.apache.nifi.NiFi.<init>(NiFi.java:152) [nifi-runtime-1.0.0.jar:1.0.0]
    >>         >   at org.apache.nifi.NiFi.main(NiFi.java:243) [nifi-runtime-1.0.0.jar:1.0.0]
    >>         > Caused by: java.io.IOException:
    >>         > org.apache.nifi.controller.UninheritableFlowException: Failed to
    >>         > connect node to cluster because local flow is different than
    >>         > cluster flow.
    >>         >   at org.apache.nifi.controller.StandardFlowService.load(StandardFlowService.java:497) ~[nifi-framework-core-1.0.0.jar:1.0.0]
    >>         >   at org.apache.nifi.web.server.JettyServer.start(JettyServer.java:746) ~[nifi-jetty-1.0.0.jar:1.0.0]
    >>         >   ... 2 common frames omitted
    >>         > Caused by: org.apache.nifi.controller.UninheritableFlowException:
    >>         > Failed to connect node to cluster because local flow is different
    >>         > than cluster flow.
    >>         >   at org.apache.nifi.controller.StandardFlowService.loadFromConnectionResponse(StandardFlowService.java:879) ~[nifi-framework-core-1.0.0.jar:1.0.0]
    >>         >   at org.apache.nifi.controller.StandardFlowService.load(StandardFlowService.java:493) ~[nifi-framework-core-1.0.0.jar:1.0.0]
    >>         >   ... 3 common frames omitted
    >>         > Caused by: org.apache.nifi.controller.UninheritableFlowException:
    >>         > Proposed Authorizer is not inheritable by the flow controller
    >>         > because of Authorizer differences: Proposed Authorizations do not
    >>         > match current Authorizations
    >>         >   at org.apache.nifi.controller.StandardFlowSynchronizer.sync(StandardFlowSynchronizer.java:252) ~[nifi-framework-core-1.0.0.jar:1.0.0]
    >>         >   at org.apache.nifi.controller.FlowController.synchronize(FlowController.java:1435) ~[nifi-framework-core-1.0.0.jar:1.0.0]
    >>         >   at org.apache.nifi.persistence.StandardXMLFlowConfigurationDAO.load(StandardXMLFlowConfigurationDAO.java:83) ~[nifi-framework-core-1.0.0.jar:1.0.0]
    >>         >   at org.apache.nifi.controller.StandardFlowService.loadFromBytes(StandardFlowService.java:671) ~[nifi-framework-core-1.0.0.jar:1.0.0]
    >>         >   at org.apache.nifi.controller.StandardFlowService.loadFromConnectionResponse(StandardFlowService.java:857) ~[nifi-framework-core-1.0.0.jar:1.0.0]
    >>         >   ... 4 common frames omitted
    >>         >
    >>         > [root@ncm-cm1 logs]#
    >>         >
    >>         >
    >>         >
    >>         > I don’t know if the ‘Proposed Authorizer is not inheritable…’
    >>         > exception is part of the problem too.
    >>         >
    >>         > The docs weren’t very clear on whether, when upgrading and using
    >>         > the legacy support of the authorized-users.xml path, the nodes
    >>         > also needed to be added to authorizers.xml.
    >>         >
    >>         > I did add them in the end, as various attempts to get the cluster
    >>         > up and running without them failed (each server didn’t seem to
    >>         > have rights to do anything).
    >>         >
    >>         >
    >>         >
    >>         > I have a lot of RPGs in my workflows, as I am ingesting many
    >>         > syslog data sources and this was the recommended pattern to
    >>         > distribute the data (ListenSyslog… run on primary, output to
    >>         > port (RPG), pick up in rest of data flow).
    >>         >
    >>         >
    >>         >
    >>         > Any suggestions on where to start trying to get this working?
    >>         >
    >>         > I’ve tried creating a new RPG on one of the nifis and connecting
    >>         > the syslog to that, which sort of worked, but then I have a bunch
    >>         > of other errors when trying to enable the ports, to do with not
    >>         > being able to connect to (what was) the NCM.
    >>         >
    >>         >
    >>         >
    >>         > Thanks
    >>         >
    >>         > Conrad
    >>         >
    >>         >
    >>         >
    >>         > SecureData, combating cyber threats
    >>         >
    >>         > ________________________________
    >>         >
    >>         > The information contained in this message or any of its
    >>         > attachments may be privileged and confidential and intended for
    >>         > the exclusive use of the intended recipient. If you are not the
    >>         > intended recipient any disclosure, reproduction, distribution or
    >>         > other dissemination or use of this communication is strictly
    >>         > prohibited. The views expressed in this email are those of the
    >>         > individual and not necessarily of SecureData Europe Ltd. Any
    >>         > prices quoted are only valid if followed up by a formal written
    >>         > quote.
    >>         >
    >>         > SecureData Europe Limited. Registered in England & Wales
    >>         > 04365896. Registered Address: SecureData House, Hermitage Court,
    >>         > Hermitage Lane, Maidstone, Kent, ME16 9NT
    >>
    >>
    >>
    >>
    >>
    >>
    >
    >
    >
    >
    >
    >
    >
    >
    >
    >
    >
    
