Nifi Cluster setup on AKS

2021-06-21 Thread SANDANAM, PAUL
Hi,

We are setting up a NiFi cluster on Azure Kubernetes Service (AKS), mainly for
auto scaling.
For the cluster setup, the list of all pods should be provided as Node
Identities in the authorizers.xml file. Since pods are created dynamically, it
is not possible to list the identities in advance.

NiFi provides another option, "Node Groups" (the typical use for this is
when nodes are dynamically added/removed from the cluster), but there is not
enough detail on how to create and configure the group. We look forward to your
support; kindly share a document/procedure for the same.

Thanks in advance,
Paul
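[The Node Group option asked about above is a property of the FileAccessPolicyProvider in authorizers.xml. A minimal sketch follows; the group name "cluster-nodes" is an assumption, and the group itself must be created by the configured user group provider (e.g. in users.xml or via LDAP). Verify property names against the Admin Guide for your NiFi version.]

```xml
<!-- authorizers.xml sketch: grant cluster-node policies to a group instead of
     listing each pod. "cluster-nodes" is a hypothetical group name; it must
     exist in the configured user group provider. -->
<authorizers>
    <userGroupProvider>
        <identifier>file-user-group-provider</identifier>
        <class>org.apache.nifi.authorization.FileUserGroupProvider</class>
        <property name="Users File">./conf/users.xml</property>
    </userGroupProvider>
    <accessPolicyProvider>
        <identifier>file-access-policy-provider</identifier>
        <class>org.apache.nifi.authorization.FileAccessPolicyProvider</class>
        <property name="User Group Provider">file-user-group-provider</property>
        <property name="Authorizations File">./conf/authorizations.xml</property>
        <property name="Initial Admin Identity">CN=admin, OU=NiFi</property>
        <!-- used instead of "Node Identity 1", "Node Identity 2", ... -->
        <property name="Node Group">cluster-nodes</property>
    </accessPolicyProvider>
    <authorizer>
        <identifier>managed-authorizer</identifier>
        <class>org.apache.nifi.authorization.StandardManagedAuthorizer</class>
        <property name="Access Policy Provider">file-access-policy-provider</property>
    </authorizer>
</authorizers>
```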


Re: NiFi Cluster setup

2020-01-26 Thread Andy LoPresto
Anurag,

There is extensive documentation on the NiFi site under the User Guide [1] and 
Admin Guide [2]. Pierre has also written a number of articles around setting up 
a cluster [3] and monitoring status [4]. 


[1] https://nifi.apache.org/docs/nifi-docs/html/user-guide.html#monitoring 

[2] 
https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#clustering
 

[3] https://pierrevillard.com/2016/08/13/apache-nifi-1-0-0-cluster-setup/ 

[4] https://pierrevillard.com/2017/05/11/monitoring-nifi-introduction/ 


Andy LoPresto
alopre...@apache.org
alopresto.apa...@gmail.com
PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69




NiFi Cluster setup

2020-01-25 Thread Anurag Sharma
Hi All,

Just getting started with NiFi. Could you please pass on resources about
best practices around cluster setup and monitoring.

Regards
Anurag

-- 

This message may contain confidential and/or privileged information. If 
you are not the addressee or authorized to receive this for the addressee, 
you must not use, copy, disclose, or take any action based on this message 
or any information herein.  If you have received this message in error, 
please advise the sender immediately by reply e-mail and delete this 
message.  The opinion expressed in this mail is that of the sender and do 
not necessarily reflect that of Quicko Technosoft Labs Private Limited. 
Thank you for your co-operation.


Re: Nifi cluster setup - Safe mode error

2015-10-11 Thread Chakrader Dewaragatla
Thanks, it worked.
In my earlier setup, the first slave to start marked itself as the primary node
(as the instructions describe); here I had to set it manually. Also, the error
message is not clear. Thanks for looking into the tickets.

-Chakri

From: Joe Witt <joe.w...@gmail.com>
Reply-To: "users@nifi.apache.org" <users@nifi.apache.org>
Date: Sunday, October 11, 2015 at 5:38 AM
To: "users@nifi.apache.org" <users@nifi.apache.org>
Subject: Re: Nifi cluster setup - Safe mode error


...on startup one is chosen automatically.  But there are scenarios where as an 
admin you have to manually tell it how to behave.  We agree you should not have 
to and there are some tickets and a feature proposal covering our plans to 
solve this.


Re: Nifi cluster setup - Safe mode error

2015-10-11 Thread Joe Witt
...on startup one is chosen automatically.  But there are scenarios where
as an admin you have to manually tell it how to behave.  We agree you
should not have to and there are some tickets and a feature proposal
covering our plans to solve this.

Re: Nifi cluster setup - Safe mode error

2015-10-11 Thread Corey Flowers
Good morning! When you are running in a cluster, you need to have a primary
node selected. You can do this by opening the cluster icon in the upper
right-hand corner of your graph. You will see a list of your servers with
ribbon icons on the right-hand side. Clicking a ribbon selects that server as
the primary node. This will allow you to change the graph.



Sent from my iPhone


Nifi cluster setup - Safe mode error

2015-10-11 Thread Chakrader Dewaragatla
Hi – I have a new NiFi cluster setup with an NCM and a slave node. Configuration
was pretty seamless, but when I try to create a processor, it spits out the
following error. The cluster dashboard shows 1 connected node and no errors.
(By the way, my two nodes have different Java versions: 1.7 and 1.8.)

Cluster is unable to service request to change flow: Received a mutable request [PUT -- https://ncm.example.com/nifi-api/controller/process-groups/fa10cff3-ff22-4132-8c65-a249f8bd0fa4/process-group-references/a554ec77-7ca5-4d96-b8ad-9699ea865966] while in safe mode

The detailed error follows.


2015-10-11 00:22:17,424 DEBUG [NiFi Web Server-73] c.s.j.spi.container.ContainerResponse Mapped exception to response: 409 (Conflict)
org.apache.nifi.cluster.manager.exception.SafeModeMutableRequestException: Received a mutable request [PUT -- https://ncm.example.com/nifi-api/controller/process-groups/fa10cff3-ff22-4132-8c65-a249f8bd0fa4/process-group-references/a554ec77-7ca5-4d96-b8ad-9699ea865966] while in safe mode
at org.apache.nifi.cluster.manager.impl.WebClusterManager.applyRequest(WebClusterManager.java:2079) ~[nifi-framework-cluster-0.3.0.jar:0.3.0]
at org.apache.nifi.cluster.manager.impl.WebClusterManager.applyRequest(WebClusterManager.java:2063) ~[nifi-framework-cluster-0.3.0.jar:0.3.0]
at org.apache.nifi.web.api.ProcessGroupResource.createProcessGroupReference(ProcessGroupResource.java:982) ~[classes/:na]
at org.apache.nifi.web.api.ProcessGroupResource.createProcessGroupReference(ProcessGroupResource.java:909) ~[classes/:na]
at org.apache.nifi.web.api.ProcessGroupResource$$FastClassBySpringCGLIB$$3adbdda6.invoke() ~[spring-core-4.1.6.RELEASE.jar:na]
at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204) ~[spring-core-4.1.6.RELEASE.jar:4.1.6.RELEASE]
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:717) ~[spring-aop-4.1.6.RELEASE.jar:4.1.6.RELEASE]
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157) ~[spring-aop-4.1.6.RELEASE.jar:4.1.6.RELEASE]
at org.springframework.security.access.intercept.aopalliance.MethodSecurityInterceptor.invoke(MethodSecurityInterceptor.java:64) ~[spring-security-core-3.2.7.RELEASE.jar:3.2.7.RELEASE]
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179) ~[spring-aop-4.1.6.RELEASE.jar:4.1.6.RELEASE]
at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:653) ~[spring-aop-4.1.6.RELEASE.jar:4.1.6.RELEASE]
at org.apache.nifi.web.api.ProcessGroupResource$$EnhancerBySpringCGLIB$$e8bd6ff5.createProcessGroupReference() ~[spring-core-4.1.6.RELEASE.jar:na]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_45]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_45]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_45]
at java.lang.reflect.Method.invoke(Method.java:497) ~[na:1.8.0_45]
at com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60) ~[jersey-server-1.19.jar:1.19]
at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205) ~[jersey-server-1.19.jar:1.19]
at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75) ~[jersey-server-1.19.jar:1.19]
at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302) ~[jersey-server-1.19.jar:1.19]
at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147) ~[jersey-server-1.19.jar:1.19]
at com.sun.jersey.server.impl.uri.rules.SubLocatorRule.accept(SubLocatorRule.java:137) ~[jersey-server-1.19.jar:1.19]
at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147) ~[jersey-server-1.19.jar:1.19]
at com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108) ~[jersey-server-1.19.jar:1.19]
at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147) ~[jersey-server-1.19.jar:1.19]
at com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84) ~[jersey-server-1.19.jar:1.19]
at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1542) [jersey-server-1.19.

Re: nifi Cluster setup issue

2015-09-30 Thread Aldrin Piri
Chakrader,

You certainly can have hybrid nodes, although we tend to avoid that when
possible.  The key point to keep in mind is the allocation of ports for
your HTTP server and clustering protocol, since two processes are living
on one system.  From the configurations you previously shared, your
clustering ports look okay; just be sure to confirm the same is true for
your web server.

In terms of the authority provider, the only currently provided
implementation in this release is that of PKI via two-way SSL as outlined
in the Administrator Guide [1].  Additional providers are an area under
current development as per NIFI-655 [2].


[1]
http://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#controlling-levels-of-access
[2] https://issues.apache.org/jira/browse/NIFI-655



Re: nifi Cluster setup issue

2015-09-30 Thread Chakrader Dewaragatla
Thanks Corey, the settings below work on the master node.

Question: can I have master and slave on one node (hybrid)?

Can you send me details on setting up the authority provider? Meanwhile I will
go through the documentation and give it a try.

On 9/29/15, 6:04 PM, "Corey Flowers"  wrote:

>Did you try setting the
>nifi.cluster.node.unicast.manager.address=
>nifi.cluster.manager.address=
>
>To
>nifi.cluster.node.unicast.manager.address=10.233.2.40
>nifi.cluster.manager.address=10.233.2.40
>
>
>
>Also if you are running the cluster you should setup the authority
>provider. Let me know if you need help with that.
>
>Sent from my iPhone
>
>> On Sep 29, 2015, at 8:49 PM, Chakrader Dewaragatla
>> wrote:
>>
>> 10.233.2.40
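[For reference, the authority provider mentioned above is configured in conf/authority-providers.xml in NiFi 0.x. A minimal sketch of the default file-provider follows; the class and property names are recalled from the 0.3.0 distribution, so treat them as assumptions and verify against the file shipped with your release.]

```xml
<!-- authority-providers.xml sketch for NiFi 0.x. The identifier must match
     nifi.security.user.authority.provider in nifi.properties (file-provider
     in the config quoted in this thread). -->
<authorityProviders>
    <provider>
        <identifier>file-provider</identifier>
        <class>org.apache.nifi.authorization.FileAuthorizationProvider</class>
        <property name="Authorized Users File">./conf/authorized-users.xml</property>
        <property name="Default Roles"/>
    </provider>
</authorityProviders>
```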


 The information contained in this transmission may contain privileged and 
confidential information. It is intended only for the use of the person(s) 
named above. If you are not the intended recipient, you are hereby notified 
that any review, dissemination, distribution or duplication of this 
communication is strictly prohibited. If you are not the intended recipient, 
please contact the sender by reply email and destroy all copies of the original 
message.



Re: nifi Cluster setup issue

2015-09-29 Thread Chakrader Dewaragatla
Thanks Aldrin/Corey.  I will try it tomorrow morning.

From: Aldrin Piri <aldrinp...@gmail.com>
Reply-To: "users@nifi.apache.org" <users@nifi.apache.org>
Date: Tuesday, September 29, 2015 at 6:44 PM
To: "users@nifi.apache.org" <users@nifi.apache.org>
Subject: Re: nifi Cluster setup issue

Oops, definitely missed what Corey sent out.   Please specify the 
nifi.cluster.manager.address as he suggests.

On Tue, Sep 29, 2015 at 9:40 PM, Aldrin Piri <aldrinp...@gmail.com> wrote:
Chakrader,

You would also need to set nifi.web.http.host for the manager as well.
Each member of the cluster advertises in the protocol how it can be accessed,
which would explain what you are seeing on the node from the master/manager.
Please try setting the manager's host as well and let us know if this gets your
cluster up and running.
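[Concretely, the host-related properties on each side might look like the sketch below. IPs are taken from this thread; which exact properties your version requires is worth double-checking against the Admin Guide.]

```properties
# nifi.properties on the NCM (manager) at 10.233.2.40 -- a sketch
nifi.web.http.host=10.233.2.40
nifi.cluster.is.manager=true
nifi.cluster.manager.address=10.233.2.40

# nifi.properties on the node (slave) at 10.233.2.42 -- a sketch
nifi.web.http.host=10.233.2.42
nifi.cluster.is.node=true
nifi.cluster.node.address=10.233.2.42
nifi.cluster.node.unicast.manager.address=10.233.2.40
```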

On Tue, Sep 29, 2015 at 8:48 PM, Chakrader Dewaragatla <chakrader.dewaraga...@lifelock.com> wrote:
Aldrin – I redeployed NiFi with default settings and modified the settings
required for cluster setup, as documented in
https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html.

I tried changing the nifi.web.http.host property on the node (slave) to its IP.
On the slave I notice the following:


2015-09-30 00:44:21,855 INFO [main] o.a.nifi.controller.StandardFlowService Connecting Node: [id=75baec43-adf1-4e17-98fd-49111a5a0c76, apiAddress=10.233.2.42, apiPort=8080, socketAddress=10.233.2.42, socketPort=3002]

On Master:

As usual:

2015-09-30 00:44:51,036 INFO [Process Pending Heartbeats] org.apache.nifi.cluster.heartbeat Received heartbeat for node [id=644370b1-4d8f-4004-ac6c-8bd614a1890b, apiAddress=localhost, apiPort=8080, socketAddress=10.233.2.42, socketPort=3002].


Here is my complete conf file:

Master conf file:

# Core Properties #
nifi.version=0.3.0
nifi.flow.configuration.file=./conf/flow.xml.gz
nifi.flow.configuration.archive.dir=./conf/archive/
nifi.flowcontroller.autoResumeState=true
nifi.flowcontroller.graceful.shutdown.period=10 sec
nifi.flowservice.writedelay.interval=500 ms
nifi.administrative.yield.duration=30 sec
# If a component has no work to do (is "bored"), how long should we wait before checking again for work?
nifi.bored.yield.duration=10 millis

nifi.authority.provider.configuration.file=./conf/authority-providers.xml
nifi.templates.directory=./conf/templates
nifi.ui.banner.text=
nifi.ui.autorefresh.interval=30 sec
nifi.nar.library.directory=./lib
nifi.nar.working.directory=./work/nar/
nifi.documentation.working.directory=./work/docs/components

# H2 Settings
nifi.database.directory=./database_repository
nifi.h2.url.append=;LOCK_TIMEOUT=25000;WRITE_DELAY=0;AUTO_SERVER=FALSE

# FlowFile Repository
nifi.flowfile.repository.implementation=org.apache.nifi.controller.repository.WriteAheadFlowFileRepository
nifi.flowfile.repository.directory=./flowfile_repository
nifi.flowfile.repository.partitions=256
nifi.flowfile.repository.checkpoint.interval=2 mins
nifi.flowfile.repository.always.sync=false

nifi.swap.manager.implementation=org.apache.nifi.controller.FileSystemSwapManager
nifi.queue.swap.threshold=2
nifi.swap.in.period=5 sec
nifi.swap.in.threads=1
nifi.swap.out.period=5 sec
nifi.swap.out.threads=4

# Content Repository
nifi.content.repository.implementation=org.apache.nifi.controller.repository.FileSystemRepository
nifi.content.claim.max.appendable.size=10 MB
nifi.content.claim.max.flow.files=100
nifi.content.repository.directory.default=./content_repository
nifi.content.repository.archive.max.retention.period=12 hours
nifi.content.repository.archive.max.usage.percentage=50%
nifi.content.repository.archive.enabled=true
nifi.content.repository.always.sync=false
nifi.content.viewer.url=/nifi-content-viewer/

# Provenance Repository Properties
nifi.provenance.repository.implementation=org.apache.nifi.provenance.PersistentProvenanceRepository

# Persistent Provenance Repository Properties
nifi.provenance.repository.directory.default=./provenance_repository
nifi.provenance.repository.max.storage.time=24 hours
nifi.provenance.repository.max.storage.size=1 GB
nifi.provenance.repository.rollover.time=30 secs
nifi.provenance.repository.rollover.size=100 MB
nifi.provenance.repository.query.threads=2
nifi.provenance.repository.index.threads=1
nifi.provenance.repository.compress.on.rollover=true
nifi.provenance.repository.always.sync=false
nifi.provenance.repository.journal.count=16
# Comma-separated list of fields. Fields that are not indexed will not be searchable. Valid fields are:
# EventType, FlowFileUUID, Filename, TransitURI, ProcessorID, AlternateIdentifierURI, ContentType, Relationship, Details
nifi.provenance.repository.indexed.fields=EventType, FlowFileUUID, Filename, ProcessorID, Relati

Re: nifi Cluster setup issue

2015-09-29 Thread Aldrin Piri

Re: nifi Cluster setup issue

2015-09-29 Thread Aldrin Piri

Re: nifi Cluster setup issue

2015-09-29 Thread Corey Flowers
Did you try setting the
nifi.cluster.node.unicast.manager.address=
nifi.cluster.manager.address=

To
nifi.cluster.node.unicast.manager.address=10.233.2.40
nifi.cluster.manager.address=10.233.2.40



Also, if you are running the cluster you should set up the authority
provider. Let me know if you need help with that.

Sent from my iPhone

> On Sep 29, 2015, at 8:49 PM, Chakrader Dewaragatla 
>  wrote:
>
> 10.233.2.40
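
Corey's suggestion above comes down to making the manager address explicit on both sides. A sketch of the matching nifi.properties entries (0.x-era unicast clustering, using the addresses from this thread):

```properties
# On the manager (10.233.2.40):
nifi.cluster.is.manager=true
nifi.cluster.manager.address=10.233.2.40
nifi.cluster.manager.protocol.port=3001

# On each node (e.g. 10.233.2.42); the unicast manager values must
# match the manager's address and protocol port:
nifi.cluster.is.node=true
nifi.cluster.node.address=10.233.2.42
nifi.cluster.node.unicast.manager.address=10.233.2.40
nifi.cluster.node.unicast.manager.protocol.port=3001
```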


Re: nifi Cluster setup issue

2015-09-29 Thread Chakrader Dewaragatla
rollover.size=100 MB

nifi.provenance.repository.query.threads=2

nifi.provenance.repository.index.threads=1

nifi.provenance.repository.compress.on.rollover=true

nifi.provenance.repository.always.sync=false

nifi.provenance.repository.journal.count=16

# Comma-separated list of fields. Fields that are not indexed will not be searchable. Valid fields are:

# EventType, FlowFileUUID, Filename, TransitURI, ProcessorID, AlternateIdentifierURI, ContentType, Relationship, Details

nifi.provenance.repository.indexed.fields=EventType, FlowFileUUID, Filename, ProcessorID, Relationship

# FlowFile Attributes that should be indexed and made searchable

nifi.provenance.repository.indexed.attributes=

# Large values for the shard size will result in more Java heap usage when searching the Provenance Repository

# but should provide better performance

nifi.provenance.repository.index.shard.size=500 MB

# Indicates the maximum length that a FlowFile attribute can be when retrieving a Provenance Event from

# the repository. If the length of any attribute exceeds this value, it will be truncated when the event is retrieved.

nifi.provenance.repository.max.attribute.length=65536


# Volatile Provenance Repository Properties

nifi.provenance.repository.buffer.size=10


# Component Status Repository

nifi.components.status.repository.implementation=org.apache.nifi.controller.status.history.VolatileComponentStatusRepository

nifi.components.status.repository.buffer.size=1440

nifi.components.status.snapshot.frequency=1 min


# Site to Site properties

nifi.remote.input.socket.host=

nifi.remote.input.socket.port=

nifi.remote.input.secure=true


# web properties #

nifi.web.war.directory=./lib

nifi.web.http.host=10.233.2.42

nifi.web.http.port=8080

nifi.web.https.host=

nifi.web.https.port=

nifi.web.jetty.working.directory=./work/jetty

nifi.web.jetty.threads=200


# security properties #

nifi.sensitive.props.key=

nifi.sensitive.props.algorithm=PBEWITHMD5AND256BITAES-CBC-OPENSSL

nifi.sensitive.props.provider=BC


nifi.security.keystore=

nifi.security.keystoreType=

nifi.security.keystorePasswd=

nifi.security.keyPasswd=

nifi.security.truststore=

nifi.security.truststoreType=

nifi.security.truststorePasswd=

nifi.security.needClientAuth=

nifi.security.user.credential.cache.duration=24 hours

nifi.security.user.authority.provider=file-provider

nifi.security.support.new.account.requests=

nifi.security.ocsp.responder.url=

nifi.security.ocsp.responder.certificate=


# cluster common properties (cluster manager and nodes must have same values) #

nifi.cluster.protocol.heartbeat.interval=5 sec

nifi.cluster.protocol.is.secure=false

nifi.cluster.protocol.socket.timeout=30 sec

nifi.cluster.protocol.connection.handshake.timeout=45 sec

# if multicast is used, then nifi.cluster.protocol.multicast.xxx properties must be configured #

nifi.cluster.protocol.use.multicast=false

nifi.cluster.protocol.multicast.address=

nifi.cluster.protocol.multicast.port=

nifi.cluster.protocol.multicast.service.broadcast.delay=500 ms

nifi.cluster.protocol.multicast.service.locator.attempts=3

nifi.cluster.protocol.multicast.service.locator.attempts.delay=1 sec


# cluster node properties (only configure for cluster nodes) #

nifi.cluster.is.node=true

nifi.cluster.node.address=10.233.2.42

nifi.cluster.node.protocol.port=3002

nifi.cluster.node.protocol.threads=2

# if multicast is not used, nifi.cluster.node.unicast.xxx must have same values as nifi.cluster.manager.xxx #

nifi.cluster.node.unicast.manager.address=10.233.2.40

nifi.cluster.node.unicast.manager.protocol.port=3001


# cluster manager properties (only configure for cluster manager) #

nifi.cluster.is.manager=false

nifi.cluster.manager.address=

nifi.cluster.manager.protocol.port=

nifi.cluster.manager.node.firewall.file=

nifi.cluster.manager.node.event.history.size=10

nifi.cluster.manager.node.api.connection.timeout=30 sec

nifi.cluster.manager.node.api.read.timeout=30 sec

nifi.cluster.manager.node.api.request.threads=10

nifi.cluster.manager.flow.retrieval.delay=5 sec

nifi.cluster.manager.protocol.threads=10

nifi.cluster.manager.safemode.duration=0 sec


# kerberos #

nifi.kerberos.krb5.file=

From: Aldrin Piri <aldrinp...@gmail.com>
Reply-To: "users@nifi.apache.org" <users@nifi.apache.org>
Date: Tuesday, September 29, 2015 at 4:26 PM
To: "users@nifi.apache.org" <users@nifi.apache.org>
Subject: Re: nifi Cluster setup issue

Chakrader,

I suspect that the nifi.web.http.host property is not using the same address as 
that specified and is transmitting "localhost" (the system's response to a 
localhost hostname lookup from Java).  While the clustering protocol 
communicates via the properties you list, the actual command-control and 
replication of requests from master to slave nodes is carried out via the REST 
API which also runs on the web tier.

Re: nifi Cluster setup issue

2015-09-29 Thread Aldrin Piri
Chakrader,

I suspect that the nifi.web.http.host property is not using the same
address as that specified and is transmitting "localhost" (the system's
response to a localhost hostname lookup from Java).  While the clustering
protocol communicates via the properties you list, the actual
command-control and replication of requests from master to slave nodes is
carried out via the REST API which also runs on the web tier.  The system's
hostname, as previously determined, is transmitted as part of the
clustering handshake.

Either the system needs to have it report a valid hostname or a host needs
to be specified for nifi.web.http.host.  In either case of hostname or
specified host, each must be network reachable from the master and able to
be bound to locally within your server.

Let us know if you need additional direction and we'd be happy to help you
through the process.

Thanks!
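
The lookup Aldrin describes — Java asking the OS for the local hostname and getting back "localhost" — can be reproduced outside NiFi. A minimal Python sketch of the same check (an illustration, not part of NiFi):

```python
import socket

# Name the OS reports for this machine -- roughly what Java's
# InetAddress.getLocalHost() starts from.
hostname = socket.gethostname()
print("hostname:", hostname)

# Address that name resolves to. If this is a loopback address,
# other cluster members will be handed an unreachable "localhost".
addr = socket.gethostbyname(hostname)
print("resolves to:", addr)

# A loopback result means nifi.web.http.host should be set explicitly
# to a network-reachable hostname or IP.
is_loopback = addr.startswith("127.")
print("loopback (unusable for clustering):", is_loopback)
```

If the resolved address is a loopback, setting nifi.web.http.host to a network-reachable address (or fixing the host's name resolution) is the corresponding remedy.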

On Tue, Sep 29, 2015 at 6:56 PM, Chakrader Dewaragatla <
chakrader.dewaraga...@lifelock.com> wrote:

> Hi – We are exploring nifi for our workflow management, I have a cluster
> setup with 3 nodes. One as master and rest as slaves.
>
> I see following error when I try to access the nifi workflow webpage.
>
> 2015-09-29 22:46:13,263 WARN [NiFi Web Server-23]
> o.a.n.c.m.impl.HttpRequestReplicatorImpl Node request for
> [id=7481fca5-930c-4d4b-84a3-66cc62b4e2d3, apiAddress=localhost,
> apiPort=8080, socketAddress=localhost, socketPort=3002] encountered
> exception: java.util.concurrent.ExecutionException:
> com.sun.jersey.api.client.ClientHandlerException:
> java.net.ConnectException: Connection refused
>
> 2015-09-29 22:46:13,263 WARN [NiFi Web Server-23]
> o.a.n.c.m.impl.HttpRequestReplicatorImpl Node request for
> [id=0abd8295-34a3-4bf7-ab06-1b6b94014740, apiAddress=localhost,
> apiPort=8080, socketAddress=10.233.2.42, socketPort=3002] encountered
> exception: java.util.concurrent.ExecutionException:
> com.sun.jersey.api.client.ClientHandlerException:
> java.net.ConnectException: Connection refused
>
> 2015-09-29 22:46:13,264 INFO [NiFi Web Server-23]
> o.a.n.c.m.e.NoConnectedNodesException
> org.apache.nifi.cluster.manager.exception.NoResponseFromNodesException: No
> nodes were able to process this request.. Returning Conflict response.
>
>
> Master is not hybrid, I wonder why it is trying to self connect 3002.
>
>
> Master settings:
>
> # cluster manager properties (only configure for cluster manager) #
>
> nifi.cluster.is.manager=true
>
> nifi.cluster.manager.address=10.233.2.40
>
> nifi.cluster.manager.protocol.port=3001
>
> nifi.cluster.manager.node.firewall.file=
>
> nifi.cluster.manager.node.event.history.size=10
>
> nifi.cluster.manager.node.api.connection.timeout=30 sec
>
> nifi.cluster.manager.node.api.read.timeout=30 sec
>
> nifi.cluster.manager.node.api.request.threads=10
>
> nifi.cluster.manager.flow.retrieval.delay=5 sec
>
> nifi.cluster.manager.protocol.threads=10
>
> nifi.cluster.manager.safemode.duration=0 sec
>
>
> Slave settings:
>
> # cluster node properties (only configure for cluster nodes) #
>
> nifi.cluster.is.node=true
>
> nifi.cluster.node.address=10.233.2.42
>
> nifi.cluster.node.protocol.port=3002
>
> nifi.cluster.node.protocol.threads=2
>
> # if multicast is not used, nifi.cluster.node.unicast.xxx must have same
> values as nifi.cluster.manager.xxx #
>
> nifi.cluster.node.unicast.manager.address=10.233.2.40
>
> nifi.cluster.node.unicast.manager.protocol.port=3001
>
>
>
>
> --
> The information contained in this transmission may contain privileged and
> confidential information. It is intended only for the use of the person(s)
> named above. If you are not the intended recipient, you are hereby notified
> that any review, dissemination, distribution or duplication of this
> communication is strictly prohibited. If you are not the intended
> recipient, please contact the sender by reply email and destroy all copies
> of the original message.
> --
>


nifi Cluster setup issue

2015-09-29 Thread Chakrader Dewaragatla
Hi – We are exploring NiFi for our workflow management. I have a cluster setup 
with 3 nodes: one as master and the rest as slaves.

I see the following error when I try to access the NiFi workflow webpage.


2015-09-29 22:46:13,263 WARN [NiFi Web Server-23] 
o.a.n.c.m.impl.HttpRequestReplicatorImpl Node request for 
[id=7481fca5-930c-4d4b-84a3-66cc62b4e2d3, apiAddress=localhost, apiPort=8080, 
socketAddress=localhost, socketPort=3002] encountered exception: 
java.util.concurrent.ExecutionException: 
com.sun.jersey.api.client.ClientHandlerException: java.net.ConnectException: 
Connection refused

2015-09-29 22:46:13,263 WARN [NiFi Web Server-23] 
o.a.n.c.m.impl.HttpRequestReplicatorImpl Node request for 
[id=0abd8295-34a3-4bf7-ab06-1b6b94014740, apiAddress=localhost, apiPort=8080, 
socketAddress=10.233.2.42, socketPort=3002] encountered exception: 
java.util.concurrent.ExecutionException: 
com.sun.jersey.api.client.ClientHandlerException: java.net.ConnectException: 
Connection refused

2015-09-29 22:46:13,264 INFO [NiFi Web Server-23] 
o.a.n.c.m.e.NoConnectedNodesException 
org.apache.nifi.cluster.manager.exception.NoResponseFromNodesException: No 
nodes were able to process this request.. Returning Conflict response.


Master is not hybrid, so I wonder why it is trying to connect to itself on port 3002.


Master settings:

# cluster manager properties (only configure for cluster manager) #

nifi.cluster.is.manager=true

nifi.cluster.manager.address=10.233.2.40

nifi.cluster.manager.protocol.port=3001

nifi.cluster.manager.node.firewall.file=

nifi.cluster.manager.node.event.history.size=10

nifi.cluster.manager.node.api.connection.timeout=30 sec

nifi.cluster.manager.node.api.read.timeout=30 sec

nifi.cluster.manager.node.api.request.threads=10

nifi.cluster.manager.flow.retrieval.delay=5 sec

nifi.cluster.manager.protocol.threads=10

nifi.cluster.manager.safemode.duration=0 sec


Slave settings:

# cluster node properties (only configure for cluster nodes) #

nifi.cluster.is.node=true

nifi.cluster.node.address=10.233.2.42

nifi.cluster.node.protocol.port=3002

nifi.cluster.node.protocol.threads=2

# if multicast is not used, nifi.cluster.node.unicast.xxx must have same values as nifi.cluster.manager.xxx #

nifi.cluster.node.unicast.manager.address=10.233.2.40

nifi.cluster.node.unicast.manager.protocol.port=3001




