Unit test failing for custom Awscredentialprovidercontrolerservice

2019-10-21 Thread sanjeet rath
Hi Team,

In our project we have a custom controller service,
Awscredentialprovidercontrolerservice, which connects to AWS using the 5
attributes (properties) we have defined. (It lives in a separate project,
NIFI-AWS-CUSTOM_PING_CONTROLER.)

In the NiFi UI this controller service works fine, but for unit testing I
am using the code below.

@Test
public void testAwscredentialprovidercontrolerservice() throws Exception {

    final TestRunner runner = TestRunners.newTestRunner(new PutS3Object());
    final Awscredentialprovidercontrolerservice serviceImpl =
            new Awscredentialprovidercontrolerservice();
    // the service has to be registered with the runner before it can be configured or enabled
    runner.addControllerService("aws-credentials", serviceImpl);
    runner.setProperty(serviceImpl, ...); // setting my 5 properties which I have created for my custom controller service
    runner.enableControllerService(serviceImpl);
    // will do assert
}

Here, enabling the controller service throws a NullPointerException in the
customValidate method; it expects the default properties (Access Key,
Secret Key, etc., which are present in the standard
AWSCredentialsProviderControllerService class) to also be declared in my
custom Awscredentialprovidercontrolerservice.

After declaring the default properties in my custom
Awscredentialprovidercontrolerservice, the unit test works fine. But the
problem is that these properties then also appear in the NiFi UI for my
custom Awscredentialprovidercontrolerservice.

So I have 2 options: either, after declaring the default properties, is
there some way to stop them from being displayed in the NiFi UI? Or, since
the flow works fine in the UI without the default properties being set in
the custom Awscredentialprovidercontrolerservice, should I set something
in the unit test case to make it pass?
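
By "set something in the unit test" I mean roughly the sketch below - I
have not verified this; the CredentialPropertyDescriptors constants and
the dummy values are assumptions on my part, and the idea is only to
satisfy the parent class's validation inside the test instead of declaring
the defaults in the service itself:

import org.apache.nifi.processors.aws.credentials.provider.factory.CredentialPropertyDescriptors;
import org.apache.nifi.processors.aws.s3.PutS3Object;
import org.apache.nifi.util.TestRunner;
import org.apache.nifi.util.TestRunners;
import org.junit.Test;

public class AwscredentialprovidercontrolerserviceTest {

    @Test
    public void testEnableServiceWithDefaultsSetOnlyInTest() throws Exception {
        final TestRunner runner = TestRunners.newTestRunner(new PutS3Object());
        final Awscredentialprovidercontrolerservice serviceImpl =
                new Awscredentialprovidercontrolerservice();
        runner.addControllerService("aws-credentials", serviceImpl);

        // dummy credentials, set only in the test, to satisfy customValidate
        runner.setProperty(serviceImpl, CredentialPropertyDescriptors.ACCESS_KEY, "test-access-key");
        runner.setProperty(serviceImpl, CredentialPropertyDescriptors.SECRET_KEY, "test-secret-key");

        // ... set the 5 custom properties here as before ...

        runner.enableControllerService(serviceImpl);
        runner.assertValid(serviceImpl);
    }
}
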
Thanks & Regards
-- 
Sanjeet Kumar Rath,
mob- +91 8777577470


RE: NiFi Kubernetes question

2019-10-21 Thread crzy210
Any suggestions?

I downloaded NiFi, but when I run runnifi from the bin folder, nothing
happens. I get the following message: "The JAVA_HOME environment variable
is not defined correctly." I downloaded the latest JRE, but I still get
the same error message.


Re: NiFi Kubernetes question

2019-10-21 Thread Swarup Karavadi
If you are hosting on the cloud, I'd recommend going for dedicated worker
nodes for the NiFi cluster. There might be rare (or not) occasions when a
worker node is under high load and needs to evict pods. If your NiFi
deployment's pod disruption budget allows for eviction of pods then there
are always chances that an evicted NiFi pod can be rescheduled on a
different node that is tainted (tainted because the node may not meet the
pod's volume affinity requirements). Your best case scenario when this
happens is that the pod will keep getting rescheduled on different nodes
until it starts up again. The worst case scenario is that it'll be stuck in
a CrashLoopBackOff limbo.

Disclaimer - I speak from experience in a non-production environment. Our
NiFi clusters will be deployed to a production k8s environment a few weeks
from now. I am only sharing some learnings I've had w.r.t. k8s StatefulSets
along the way.

Hope this helps,
Swarup.

On Mon, Oct 21, 2019, 9:32 PM Wyllys Ingersoll <
wyllys.ingers...@keepertech.com> wrote:

>
> We had success running  a 3-node cluster in kubernetes using modified
> configuration scripts from the AlexJones github repo -
> https://github.com/AlexsJones/nifi
> Ours is on an internal bare-metal k8s lab configuration, not in a public
> cloud at this time, but the basics are the same either way.
>
> - setup nifi as a stateful set so you can scale up or down as needed. When
> a pod fails, k8s will spawn another to take its place and zookeeper will
> manage the election of the master during transitions.
> - manage your certs as K8S secrets.
> - you also need to also have a stateful set of zookeeper pods for managing
> the nifi servers.
> - use persistent volume mounts to hold the flowfile, database, content,
> and provenance _repository directories
>
>
>
> On Mon, Oct 21, 2019 at 11:21 AM Joe Gresock  wrote:
>
>> Apologies if this has been answered on the list already..
>>
>> Does anyone have knowledge of the latest in the realm of nifi kubernetes
>> support?  I see some pages like https://hub.helm.sh/charts/cetic/nifi,
>> and https://github.com/AlexsJones/nifi but am unsure which example to
>> pick to start with.
>>
>> I'm curious how well kubernetes maintains the nifi cluster state with pod
>> failures.  I.e., do any of the k8s implementations play well with the nifi
>> cluster list so that we don't have dangling downed nodes in the cluster?
>> Also, I'm wondering how certs are managed in a secured cluster.
>>
>> Appreciate any nudge in the right direction,
>> Joe
>>
>


Re: NiFi Kubernetes question

2019-10-21 Thread Wyllys Ingersoll
We had success running a 3-node cluster in Kubernetes using modified
configuration scripts from the AlexsJones GitHub repo -
https://github.com/AlexsJones/nifi
Ours is on an internal bare-metal k8s lab configuration, not in a public
cloud at this time, but the basics are the same either way.

- set up NiFi as a StatefulSet so you can scale up or down as needed. When
a pod fails, k8s will spawn another to take its place and ZooKeeper will
manage the election of the master during transitions.
- manage your certs as K8S secrets.
- you also need a StatefulSet of ZooKeeper pods for managing the NiFi
servers.
- use persistent volume mounts to hold the flowfile, database, content, and
provenance *_repository directories



On Mon, Oct 21, 2019 at 11:21 AM Joe Gresock  wrote:

> Apologies if this has been answered on the list already..
>
> Does anyone have knowledge of the latest in the realm of nifi kubernetes
> support?  I see some pages like https://hub.helm.sh/charts/cetic/nifi,
> and https://github.com/AlexsJones/nifi but am unsure which example to
> pick to start with.
>
> I'm curious how well kubernetes maintains the nifi cluster state with pod
> failures.  I.e., do any of the k8s implementations play well with the nifi
> cluster list so that we don't have dangling downed nodes in the cluster?
> Also, I'm wondering how certs are managed in a secured cluster.
>
> Appreciate any nudge in the right direction,
> Joe
>


NiFi Kubernetes question

2019-10-21 Thread Joe Gresock
Apologies if this has been answered on the list already..

Does anyone have knowledge of the latest in the realm of nifi kubernetes
support?  I see some pages like https://hub.helm.sh/charts/cetic/nifi, and
https://github.com/AlexsJones/nifi but am unsure which example to pick to
start with.

I'm curious how well kubernetes maintains the nifi cluster state with pod
failures.  I.e., do any of the k8s implementations play well with the nifi
cluster list so that we don't have dangling downed nodes in the cluster?
Also, I'm wondering how certs are managed in a secured cluster.

Appreciate any nudge in the right direction,
Joe