Suggestions about the best way to count the status of several flows

2020-03-05 Thread Jairo Henao
Hi all,

I have a flow that is split into several fragments, processed one by one,
and finally merged to notify the status.

If the MergeContent processor keeps only the common attributes, what
would be the best way to count the final state of each flow (Successful,
Failed, Ignored)?

Maybe I could use a distributed cache? But I only need one number for each
state.

NiFi counters would need to be cleared with ExecuteScript so that they do
not accumulate data between runs.
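
Incrementing the counters themselves seems simple enough. Roughly this
ExecuteScript (Jython) sketch is what I have in mind, assuming each
fragment carries a fragment.status attribute set upstream:

    # ExecuteScript (Jython) sketch: bump one NiFi counter per final state.
    # 'session' and 'REL_SUCCESS' are bound by ExecuteScript itself; the
    # 'fragment.status' attribute is an assumption about the upstream flow.
    flowFile = session.get()
    if flowFile is not None:
        status = flowFile.getAttribute('fragment.status') or 'Unknown'
        # adjustCounter(name, delta, immediate); the counts show up in the
        # NiFi Counters view
        session.adjustCounter('flows.' + status, 1, True)
        session.transfer(flowFile, REL_SUCCESS)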

I have the option of accumulating in a database table, but I would like
something lighter.

Thanks


-- 
Regards


Re: NiFi to NiFi Registry error: "Untrusted proxy ... for write operation"

2020-03-05 Thread Bryan Bende
There was a bug in the 0.5.0 release that caused group-based policies to
not work correctly for proxies [1].

Can you try adding the user that represents the NiFi instance directly to
the Proxy policy in Registry?

[1] https://issues.apache.org/jira/browse/NIFIREG-358

On Thu, Mar 5, 2020 at 1:19 PM Joseph Wheeler 
wrote:

> Hello!
>
> I am having issues getting NiFi Registry to work properly.
>
> I have NiFi and NiFi Registry running, both configured to use SSL, both
> using the same keystore.jks and truststore.jks files, and both with user
> accounts mapped to PKI certificate FQDNs. I have no issue logging into the
> interfaces for either NiFi or NiFi Registry.
>
> I have added the NiFi Registry URL in NiFi under NiFi Settings -> Registry
> Clients.
>
> I have created a bucket in NiFi Registry. It is set to be publicly visible
> and has a policy created that gives the user group (which I created in NiFi
> Registry and has all users in it) all permission options.
>
> In NiFi, I have a user group created with all users in it that has
> maximum permissions for all options in NiFi and on the particular NiFi flow
> we're working on.
>
> The issue I have is:
>
> 1.) I log in to NiFi, right-click a process group (doesn't seem to matter
> which one) and click Version -> Start version control.
> 2.) The Save Flow Version wizard pops up, automatically populated with the
> registry name and the bucket name I created in NiFi Registry. I enter
> random characters in the 3 empty fields and click Save.
> 3.) Error message appears:
> "Failed to register flow with Flow Registry due to Error creating flow:
> Untrusted proxy [**] for write operation.
> Contact the system administrator."
>
> In the nifi-registry-app.log, I see this message:
> 2020-03-05 18:16:11,272 INFO [NiFi Registry Web Server-17]
> o.a.n.r.w.m.AccessDeniedExceptionMapper identity[**],
> groups[*]* does not have permission to access the
> requested resource. Untrusted proxy  [**]   for
> write operation. Returning Forbidden response.
>
> However, my account has every permission available in both NiFi and
> NiFi Registry.
>
> Any idea where to start?
>


Re: Exception is showing in nifi UI users page

2020-03-05 Thread sanjeet rath
Thanks Matt, for the quick response.

Thanks a lot,

Sanjeet

On Thu, 5 Mar 2020, 11:43 pm Matt Gilman,  wrote:

> I just responded to your StackOverflow post:
>
>
> https://stackoverflow.com/questions/60551242/nifi-user-addition-gives-u-null-pointer-exception/60551638#60551638
>
> I believe you'll need to upgrade to a version that addresses the bug.
>
> Thanks!
>
> On Thu, Mar 5, 2020 at 1:10 PM sanjeet rath 
> wrote:
>
>> Hi Team,
>>
>> I have been using a NiFi cluster for a month and have been able to add new
>> users, policies, everything; it is an LDAP-based user setup.
>> But suddenly, for the last 2 days, on the NiFi Users page (after clicking
>> on Users in the NiFi UI) I am getting the error message "An unexpected
>> error has occurred. Please click logs for more details."
>> In nifi-user.log I found the log below.
>>
>> o.a.nifi.web.api.config.ThrowableMapper An unexpected error has occurred:
>> java.lang.NullPointerException. Returning Internal Server Error response.
>> java.lang.NullPointerException: null
>> at org.apache.nifi.web.dao.impl.StandardPolicyBasedAuthorizerDAO.lambda$null$2(StandardPolicyBasedAuthorizerDAO.java:285)
>> at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:174)
>> at java.util.HashMap$KeySpliterator.forEachRemaining(HashMap.java:1553)
>> at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
>> at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
>> at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
>> at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
>> at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
>>
>> I am not able to figure out where I should start looking.
>> Could someone please suggest a starting point: where to check and what
>> needs to be checked?
>>
>> Thanks,
>> Sanjeet
>>
>>
>> --
>> Sanjeet Kumar Rath,
>> mob- +91 8777577470
>>
>>


NiFi to NiFi Registry error: "Untrusted proxy ... for write operation"

2020-03-05 Thread Joseph Wheeler
Hello!

I am having issues getting NiFi Registry to work properly.

I have NiFi and NiFi Registry running, both configured to use SSL, both
using the same keystore.jks and truststore.jks files, and both with user
accounts mapped to PKI certificate FQDNs. I have no issue logging into the
interfaces for either NiFi or NiFi Registry.

I have added the NiFi Registry URL in NiFi under NiFi Settings -> Registry
Clients.

I have created a bucket in NiFi Registry. It is set to be publicly visible
and has a policy created that gives the user group (which I created in NiFi
Registry and has all users in it) all permission options.

In NiFi, I have a user group created with all users in it that has maximum
permissions for all options in NiFi and on the particular NiFi flow we're
working on.

The issue I have is:

1.) I log in to NiFi, right-click a process group (doesn't seem to matter
which one) and click Version -> Start version control.
2.) The Save Flow Version wizard pops up, automatically populated with the
registry name and the bucket name I created in NiFi Registry. I enter
random characters in the 3 empty fields and click Save.
3.) Error message appears:
"Failed to register flow with Flow Registry due to Error creating flow:
Untrusted proxy [**] for write operation.
Contact the system administrator."

In the nifi-registry-app.log, I see this message:
2020-03-05 18:16:11,272 INFO [NiFi Registry Web Server-17]
o.a.n.r.w.m.AccessDeniedExceptionMapper identity[**],
groups[*]* does not have permission to access the requested
resource. Untrusted proxy  [**]   for write
operation. Returning Forbidden response.

However, my account has every permission available in both NiFi and
NiFi Registry.

Any idea where to start?


Re: Exception is showing in nifi UI users page

2020-03-05 Thread Matt Gilman
I just responded to your StackOverflow post:

https://stackoverflow.com/questions/60551242/nifi-user-addition-gives-u-null-pointer-exception/60551638#60551638

I believe you'll need to upgrade to a version that addresses the bug.

Thanks!

On Thu, Mar 5, 2020 at 1:10 PM sanjeet rath  wrote:

> Hi Team,
>
> I have been using a NiFi cluster for a month and have been able to add new
> users, policies, everything; it is an LDAP-based user setup.
> But suddenly, for the last 2 days, on the NiFi Users page (after clicking
> on Users in the NiFi UI) I am getting the error message "An unexpected
> error has occurred. Please click logs for more details."
> In nifi-user.log I found the log below.
>
> o.a.nifi.web.api.config.ThrowableMapper An unexpected error has occurred:
> java.lang.NullPointerException. Returning Internal Server Error response.
> java.lang.NullPointerException: null
> at org.apache.nifi.web.dao.impl.StandardPolicyBasedAuthorizerDAO.lambda$null$2(StandardPolicyBasedAuthorizerDAO.java:285)
> at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:174)
> at java.util.HashMap$KeySpliterator.forEachRemaining(HashMap.java:1553)
> at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
> at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
> at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
> at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
> at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
>
> I am not able to figure out where I should start looking.
> Could someone please suggest a starting point: where to check and what
> needs to be checked?
>
> Thanks,
> Sanjeet
>
>
> --
> Sanjeet Kumar Rath,
> mob- +91 8777577470
>
>


Exception is showing in nifi UI users page

2020-03-05 Thread sanjeet rath
Hi Team,

I have been using a NiFi cluster for a month and have been able to add new
users, policies, everything; it is an LDAP-based user setup.
But suddenly, for the last 2 days, on the NiFi Users page (after clicking
on Users in the NiFi UI) I am getting the error message "An unexpected
error has occurred. Please click logs for more details."
In nifi-user.log I found the log below.

o.a.nifi.web.api.config.ThrowableMapper An unexpected error has occurred:
java.lang.NullPointerException. Returning Internal Server Error response.
java.lang.NullPointerException: null
at org.apache.nifi.web.dao.impl.StandardPolicyBasedAuthorizerDAO.lambda$null$2(StandardPolicyBasedAuthorizerDAO.java:285)
at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:174)
at java.util.HashMap$KeySpliterator.forEachRemaining(HashMap.java:1553)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)

I am not able to figure out where I should start looking.
Could someone please suggest a starting point: where to check and what
needs to be checked?

Thanks,
Sanjeet


-- 
Sanjeet Kumar Rath,
mob- +91 8777577470


Re: Metrics via Prometheus

2020-03-05 Thread Eric Ladner
Grr.. you'd think "Users" would have the ability to add items to the "For
Users" section on the wiki.

On Thu, Mar 5, 2020 at 10:33 AM Eric Ladner  wrote:

> eh.. maybe I could throw it on the wiki if somebody grants write access.
>
> On Thu, Mar 5, 2020 at 10:28 AM Joe Witt  wrote:
>
>> Eric,
>>
>> It is probably easier to use blogspot or something like that.  But if you
>> want to offer a guest submission I'm sure we can figure it out for the
>> Apache blog too.  I'm just not sure on the steps.
>>
>> Thanks
>>
>> On Thu, Mar 5, 2020 at 11:27 AM Eric Ladner 
>> wrote:
>>
>>> How would I submit something to the NiFi blog?
>>>
>>> On Thu, Mar 5, 2020 at 6:37 AM Eric Ladner 
>>> wrote:
>>>
 Good idea.  I'll look into that today.

 On Thu, Mar 5, 2020 at 5:56 AM Paul Parker 
 wrote:

> It would be great if you could share your story as a blog post.
>
> Eric Ladner  wrote on Wed., 4 March 2020,
> 19:45:
>
>> Thank you so much for your guidance.  I was able to get data flowing
>> into Prometheus fairly easily once all the pieces were understood.
>>
>> Now, I just need to dig into Prometheus queries and make some Grafana
>> dashboards.
>>
>> On Tue, Mar 3, 2020 at 2:54 PM Yolanda Davis <
>> yolanda.m.da...@gmail.com> wrote:
>>
>>> Sure, not a problem! Hopefully the thoughts below can help you get
>>> started:
>>>
>>> As you may know, the PrometheusReportingTask is a bit different from
>>> other tasks in that it actually exposes an endpoint for Prometheus to
>>> scrape (vs. pushing data directly to Prometheus). When the task is
>>> started, the endpoint is created on the port you designate under
>>> “/metrics”; so just ensure that you don’t have anything already on the
>>> port you select. If you want to ensure that you have a secured endpoint
>>> for Prometheus to connect, be sure to use an SSL Context Service (a
>>> controller service that will allow the reporting task to use the
>>> appropriate key/trust stores for TLS). Also you’ll want to consider the
>>> levels at which you are reporting (Root Group, Process Group or All
>>> Components), especially in terms of the amount of data you are looking
>>> to send back. JVM metrics can be sent as well as flow-specific metrics.
>>> Finally, consider how often metrics should be refreshed by adjusting
>>> the Scheduling Strategy in the Settings tab for the task.
>>>
>>> When starting the task you should be able to go directly to the
>>> endpoint (without Prometheus) to confirm its output (e.g.
>>> http://localhost:9092/metrics). You should see a format similar to
>>> what Prometheus supports for its scraping jobs (see the example at
>>> https://prometheus.io/docs/instrumenting/exposition_formats/#text-format-example)
>>>
>>> On the Prometheus side you’ll want to follow their instructions on how
>>> to set up a scrape configuration that will point to the newly created
>>> metrics endpoint. I’d recommend checking out the first steps for help
>>> (https://prometheus.io/docs/introduction/first_steps/#configuring-prometheus)
>>> and then, when you need to provide more advanced settings, take a look
>>> here:
>>> https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config.
>>> The key is you’ll want to define a new scrape job that looks at the
>>> NiFi endpoint for scraping. To start you may want to refer to the
>>> cluster directly but later add the security credentials or use another
>>> method for discovering the endpoint.
>>>
>>> Once these configurations are in place, and Prometheus is started (or
>>> restarted), after a few seconds you should begin to see metrics landing
>>> when querying in Grafana.
>>>
>>> I hope this helps!  Please let me know if you have any further
>>> questions.
>>>
>>> -yolanda
>>>
>>> On Tue, Mar 3, 2020 at 2:10 PM Eric Ladner 
>>> wrote:
>>>
 Yes, exactly!   Reporting Task -> Prometheus -> Grafana for keeping
 an eye on things running in NiFi.

 If you have any hints/tips on getting things working, I'd be
 grateful.

 On Tue, Mar 3, 2020 at 12:35 PM Yolanda Davis <
 yolanda.m.da...@gmail.com> wrote:

> Hi Eric,
>
> Were you looking to use the Prometheus Reporting Task for making
> metrics available for Prometheus scraping? I don't believe any
> documentation outside of what is in NiFi exists just yet, but I'm 
> happy to
> help answer questions you may have (I've used this task recently).
>
> -yolanda
>
> On Tue, Mar 3, 2020 at 10:51 AM Eric Ladner 
> wrote:
>
>> Is there a 

Re: Metrics via Prometheus

2020-03-05 Thread Eric Ladner
eh.. maybe I could throw it on the wiki if somebody grants write access.

On Thu, Mar 5, 2020 at 10:28 AM Joe Witt  wrote:

> Eric,
>
> It is probably easier to use blogspot or something like that.  But if you
> want to offer a guest submission I'm sure we can figure it out for the
> Apache blog too.  I'm just not sure on the steps.
>
> Thanks
>
> On Thu, Mar 5, 2020 at 11:27 AM Eric Ladner  wrote:
>
>> How would I submit something to the NiFi blog?
>>
>> On Thu, Mar 5, 2020 at 6:37 AM Eric Ladner  wrote:
>>
>>> Good idea.  I'll look into that today.
>>>
>>> On Thu, Mar 5, 2020 at 5:56 AM Paul Parker 
>>> wrote:
>>>
 It would be great if you could share your story as a blog post.

 Eric Ladner  wrote on Wed., 4 March 2020,
 19:45:

> Thank you so much for your guidance.  I was able to get data flowing
> into Prometheus fairly easily once all the pieces were understood.
>
> Now, I just need to dig into Prometheus queries and make some Grafana
> dashboards.
>
> On Tue, Mar 3, 2020 at 2:54 PM Yolanda Davis <
> yolanda.m.da...@gmail.com> wrote:
>
>> Sure, not a problem! Hopefully the thoughts below can help you get
>> started:
>>
>> As you may know, the PrometheusReportingTask is a bit different from
>> other tasks in that it actually exposes an endpoint for Prometheus to
>> scrape (vs. pushing data directly to Prometheus). When the task is
>> started, the endpoint is created on the port you designate under
>> “/metrics”; so just ensure that you don’t have anything already on the
>> port you select. If you want to ensure that you have a secured endpoint
>> for Prometheus to connect, be sure to use an SSL Context Service (a
>> controller service that will allow the reporting task to use the
>> appropriate key/trust stores for TLS). Also you’ll want to consider the
>> levels at which you are reporting (Root Group, Process Group or All
>> Components), especially in terms of the amount of data you are looking
>> to send back. JVM metrics can be sent as well as flow-specific metrics.
>> Finally, consider how often metrics should be refreshed by adjusting
>> the Scheduling Strategy in the Settings tab for the task.
>>
>> When starting the task you should be able to go directly to the
>> endpoint (without Prometheus) to confirm its output (e.g.
>> http://localhost:9092/metrics). You should see a format similar to
>> what Prometheus supports for its scraping jobs (see the example at
>> https://prometheus.io/docs/instrumenting/exposition_formats/#text-format-example)
>>
>> On the Prometheus side you’ll want to follow their instructions on how
>> to set up a scrape configuration that will point to the newly created
>> metrics endpoint. I’d recommend checking out the first steps for help
>> (https://prometheus.io/docs/introduction/first_steps/#configuring-prometheus)
>> and then, when you need to provide more advanced settings, take a look
>> here:
>> https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config.
>> The key is you’ll want to define a new scrape job that looks at the
>> NiFi endpoint for scraping. To start you may want to refer to the
>> cluster directly but later add the security credentials or use another
>> method for discovering the endpoint.
>>
>> Once these configurations are in place, and Prometheus is started (or
>> restarted), after a few seconds you should begin to see metrics landing
>> when querying in Grafana.
>>
>> I hope this helps!  Please let me know if you have any further
>> questions.
>>
>> -yolanda
>>
>> On Tue, Mar 3, 2020 at 2:10 PM Eric Ladner 
>> wrote:
>>
>>> Yes, exactly!   Reporting Task -> Prometheus -> Grafana for keeping
>>> an eye on things running in NiFi.
>>>
>>> If you have any hints/tips on getting things working, I'd be
>>> grateful.
>>>
>>> On Tue, Mar 3, 2020 at 12:35 PM Yolanda Davis <
>>> yolanda.m.da...@gmail.com> wrote:
>>>
 Hi Eric,

 Were you looking to use the Prometheus Reporting Task for making
 metrics available for Prometheus scraping? I don't believe any
 documentation outside of what is in NiFi exists just yet, but I'm 
 happy to
 help answer questions you may have (I've used this task recently).

 -yolanda

 On Tue, Mar 3, 2020 at 10:51 AM Eric Ladner 
 wrote:

> Is there a guide to setting up Nifi and Prometheus anywhere?  The
> nar docs are a little vague.
>
> Thanks,
>
> Eric Ladner
>


 --
 --
 yolanda.m.da...@gmail.com
 @YolandaMDavis


>>>
>>> --
>>> Eric 

Re: Metrics via Prometheus

2020-03-05 Thread Joe Witt
Eric,

It is probably easier to use blogspot or something like that.  But if you
want to offer a guest submission I'm sure we can figure it out for the
Apache blog too.  I'm just not sure on the steps.

Thanks

On Thu, Mar 5, 2020 at 11:27 AM Eric Ladner  wrote:

> How would I submit something to the NiFi blog?
>
> On Thu, Mar 5, 2020 at 6:37 AM Eric Ladner  wrote:
>
>> Good idea.  I'll look into that today.
>>
>> On Thu, Mar 5, 2020 at 5:56 AM Paul Parker  wrote:
>>
>>> It would be great if you could share your story as a blog post.
>>>
>>> Eric Ladner  wrote on Wed., 4 March 2020, 19:45:
>>>
 Thank you so much for your guidance.  I was able to get data flowing
 into Prometheus fairly easily once all the pieces were understood.

 Now, I just need to dig into Prometheus queries and make some Grafana
 dashboards.

 On Tue, Mar 3, 2020 at 2:54 PM Yolanda Davis 
 wrote:

> Sure, not a problem! Hopefully the thoughts below can help you get started:
>
> As you may know, the PrometheusReportingTask is a bit different from
> other tasks in that it actually exposes an endpoint for Prometheus to
> scrape (vs. pushing data directly to Prometheus). When the task is
> started, the endpoint is created on the port you designate under
> “/metrics”; so just ensure that you don’t have anything already on the
> port you select. If you want to ensure that you have a secured endpoint
> for Prometheus to connect, be sure to use an SSL Context Service (a
> controller service that will allow the reporting task to use the
> appropriate key/trust stores for TLS). Also you’ll want to consider the
> levels at which you are reporting (Root Group, Process Group or All
> Components), especially in terms of the amount of data you are looking
> to send back. JVM metrics can be sent as well as flow-specific metrics.
> Finally, consider how often metrics should be refreshed by adjusting
> the Scheduling Strategy in the Settings tab for the task.
>
> When starting the task you should be able to go directly to the
> endpoint (without Prometheus) to confirm its output (e.g.
> http://localhost:9092/metrics). You should see a format similar to
> what Prometheus supports for its scraping jobs (see the example at
> https://prometheus.io/docs/instrumenting/exposition_formats/#text-format-example)
>
> On the Prometheus side you’ll want to follow their instructions on how
> to set up a scrape configuration that will point to the newly created
> metrics endpoint. I’d recommend checking out the first steps for help
> (https://prometheus.io/docs/introduction/first_steps/#configuring-prometheus)
> and then, when you need to provide more advanced settings, take a look
> here:
> https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config.
> The key is you’ll want to define a new scrape job that looks at the
> NiFi endpoint for scraping. To start you may want to refer to the
> cluster directly but later add the security credentials or use another
> method for discovering the endpoint.
>
> Once these configurations are in place, and Prometheus is started (or
> restarted), after a few seconds you should begin to see metrics landing
> when querying in Grafana.
>
> I hope this helps!  Please let me know if you have any further
> questions.
>
> -yolanda
>
> On Tue, Mar 3, 2020 at 2:10 PM Eric Ladner 
> wrote:
>
>> Yes, exactly!   Reporting Task -> Prometheus -> Grafana for keeping
>> an eye on things running in NiFi.
>>
>> If you have any hints/tips on getting things working, I'd be grateful.
>>
>> On Tue, Mar 3, 2020 at 12:35 PM Yolanda Davis <
>> yolanda.m.da...@gmail.com> wrote:
>>
>>> Hi Eric,
>>>
>>> Were you looking to use the Prometheus Reporting Task for making
>>> metrics available for Prometheus scraping? I don't believe any
>>> documentation outside of what is in NiFi exists just yet, but I'm happy 
>>> to
>>> help answer questions you may have (I've used this task recently).
>>>
>>> -yolanda
>>>
>>> On Tue, Mar 3, 2020 at 10:51 AM Eric Ladner 
>>> wrote:
>>>
 Is there a guide to setting up Nifi and Prometheus anywhere?  The
 nar docs are a little vague.

 Thanks,

 Eric Ladner

>>>
>>>
>>> --
>>> --
>>> yolanda.m.da...@gmail.com
>>> @YolandaMDavis
>>>
>>>
>>
>> --
>> Eric Ladner
>>
>
>
> --
> --
> yolanda.m.da...@gmail.com
> @YolandaMDavis
>
>

 --
 Eric Ladner

>>>
>>
>> --
>> Eric Ladner
>>
>
>
> --
> Eric Ladner
>


Re: Metrics via Prometheus

2020-03-05 Thread Eric Ladner
How would I submit something to the NiFi blog?

On Thu, Mar 5, 2020 at 6:37 AM Eric Ladner  wrote:

> Good idea.  I'll look into that today.
>
> On Thu, Mar 5, 2020 at 5:56 AM Paul Parker  wrote:
>
>> It would be great if you could share your story as a blog post.
>>
>> Eric Ladner  wrote on Wed., 4 March 2020, 19:45:
>>
>>> Thank you so much for your guidance.  I was able to get data flowing
>>> into Prometheus fairly easily once all the pieces were understood.
>>>
>>> Now, I just need to dig into Prometheus queries and make some Grafana
>>> dashboards.
>>>
>>> On Tue, Mar 3, 2020 at 2:54 PM Yolanda Davis 
>>> wrote:
>>>
 Sure, not a problem! Hopefully the thoughts below can help you get started:

 As you may know, the PrometheusReportingTask is a bit different from
 other tasks in that it actually exposes an endpoint for Prometheus to
 scrape (vs. pushing data directly to Prometheus). When the task is
 started, the endpoint is created on the port you designate under
 “/metrics”; so just ensure that you don’t have anything already on the
 port you select. If you want to ensure that you have a secured endpoint
 for Prometheus to connect, be sure to use an SSL Context Service (a
 controller service that will allow the reporting task to use the
 appropriate key/trust stores for TLS). Also you’ll want to consider the
 levels at which you are reporting (Root Group, Process Group or All
 Components), especially in terms of the amount of data you are looking
 to send back. JVM metrics can be sent as well as flow-specific metrics.
 Finally, consider how often metrics should be refreshed by adjusting
 the Scheduling Strategy in the Settings tab for the task.

 When starting the task you should be able to go directly to the
 endpoint (without Prometheus) to confirm its output (e.g.
 http://localhost:9092/metrics). You should see a format similar to
 what Prometheus supports for its scraping jobs (see the example at
 https://prometheus.io/docs/instrumenting/exposition_formats/#text-format-example)

 On the Prometheus side you’ll want to follow their instructions on how
 to set up a scrape configuration that will point to the newly created
 metrics endpoint. I’d recommend checking out the first steps for help
 (https://prometheus.io/docs/introduction/first_steps/#configuring-prometheus)
 and then, when you need to provide more advanced settings, take a look
 here:
 https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config.
 The key is you’ll want to define a new scrape job that looks at the
 NiFi endpoint for scraping. To start you may want to refer to the
 cluster directly but later add the security credentials or use another
 method for discovering the endpoint.

 Once these configurations are in place, and Prometheus is started (or
 restarted), after a few seconds you should begin to see metrics landing
 when querying in Grafana.

 I hope this helps!  Please let me know if you have any further
 questions.

 -yolanda

 On Tue, Mar 3, 2020 at 2:10 PM Eric Ladner 
 wrote:

> Yes, exactly!   Reporting Task -> Prometheus -> Grafana for keeping an
> eye on things running in NiFi.
>
> If you have any hints/tips on getting things working, I'd be grateful.
>
> On Tue, Mar 3, 2020 at 12:35 PM Yolanda Davis <
> yolanda.m.da...@gmail.com> wrote:
>
>> Hi Eric,
>>
>> Were you looking to use the Prometheus Reporting Task for making
>> metrics available for Prometheus scraping? I don't believe any
>> documentation outside of what is in NiFi exists just yet, but I'm happy 
>> to
>> help answer questions you may have (I've used this task recently).
>>
>> -yolanda
>>
>> On Tue, Mar 3, 2020 at 10:51 AM Eric Ladner 
>> wrote:
>>
>>> Is there a guide to setting up Nifi and Prometheus anywhere?  The
>>> nar docs are a little vague.
>>>
>>> Thanks,
>>>
>>> Eric Ladner
>>>
>>
>>
>> --
>> --
>> yolanda.m.da...@gmail.com
>> @YolandaMDavis
>>
>>
>
> --
> Eric Ladner
>


 --
 --
 yolanda.m.da...@gmail.com
 @YolandaMDavis


>>>
>>> --
>>> Eric Ladner
>>>
>>
>
> --
> Eric Ladner
>


-- 
Eric Ladner


JSON conversion issue?

2020-03-05 Thread Eric Ladner
I have a flow that's reading out of a database that has some text fields
with UTF-8 characters in them (the Windows right double quote, for example).

The data in the table is clearly the right character (UTF-8 sequence E2 80
9D), which should be escaped as \u201d, but for some reason it is being
stored as \u001d (which is an ASCII-range control character).
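
For what it's worth, the byte math itself is easy to confirm with a small
Python script (this only verifies the sequence; it doesn't say where the
corruption creeps in):

    # E2 80 9D is the UTF-8 encoding of U+201D (right double quotation mark)
    ch = b"\xe2\x80\x9d".decode("utf-8")
    print(hex(ord(ch)))  # prints 0x201d
    # Note that 0x201D and 0x001D share the low byte 0x1D, as if only one
    # byte of the code point survives somewhere along the way.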

I can't figure out where the problem is being introduced, but the JDBC
driver, the database, and the destination system (MarkLogic) all support
UTF-8.

Is this a NiFi bug?  Any clues on narrowing down where this is being
introduced would be helpful.

Thanks,

Eric Ladner


Re: Metrics via Prometheus

2020-03-05 Thread Eric Ladner
Good idea.  I'll look into that today.

On Thu, Mar 5, 2020 at 5:56 AM Paul Parker  wrote:

> It would be great if you could share your story as a blog post.
>
> Eric Ladner  wrote on Wed., 4 March 2020, 19:45:
>
>> Thank you so much for your guidance.  I was able to get data flowing into
>> Prometheus fairly easily once all the pieces were understood.
>>
>> Now, I just need to dig into Prometheus queries and make some Grafana
>> dashboards.
>>
>> On Tue, Mar 3, 2020 at 2:54 PM Yolanda Davis 
>> wrote:
>>
>>> Sure, not a problem! Hopefully the thoughts below can help you get started:
>>>
>>> As you may know, the PrometheusReportingTask is a bit different from
>>> other tasks in that it actually exposes an endpoint for Prometheus to
>>> scrape (vs. pushing data directly to Prometheus). When the task is
>>> started, the endpoint is created on the port you designate under
>>> “/metrics”; so just ensure that you don’t have anything already on the
>>> port you select. If you want to ensure that you have a secured endpoint
>>> for Prometheus to connect, be sure to use an SSL Context Service (a
>>> controller service that will allow the reporting task to use the
>>> appropriate key/trust stores for TLS). Also you’ll want to consider the
>>> levels at which you are reporting (Root Group, Process Group or All
>>> Components), especially in terms of the amount of data you are looking
>>> to send back. JVM metrics can be sent as well as flow-specific metrics.
>>> Finally, consider how often metrics should be refreshed by adjusting
>>> the Scheduling Strategy in the Settings tab for the task.
>>>
>>> When starting the task you should be able to go directly to the
>>> endpoint (without Prometheus) to confirm its output (e.g.
>>> http://localhost:9092/metrics). You should see a format similar to
>>> what Prometheus supports for its scraping jobs (see the example at
>>> https://prometheus.io/docs/instrumenting/exposition_formats/#text-format-example)
>>>
>>> On the Prometheus side you’ll want to follow their instructions on how
>>> to set up a scrape configuration that will point to the newly created
>>> metrics endpoint. I’d recommend checking out the first steps for help
>>> (https://prometheus.io/docs/introduction/first_steps/#configuring-prometheus)
>>> and then, when you need to provide more advanced settings, take a look
>>> here:
>>> https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config.
>>> The key is you’ll want to define a new scrape job that looks at the
>>> NiFi endpoint for scraping. To start you may want to refer to the
>>> cluster directly but later add the security credentials or use another
>>> method for discovering the endpoint.
>>>
>>> Once these configurations are in place, and Prometheus is started (or
>>> restarted), after a few seconds you should begin to see metrics landing
>>> when querying in Grafana.
>>>
>>> I hope this helps!  Please let me know if you have any further questions.
>>>
>>> -yolanda
>>>
>>> On Tue, Mar 3, 2020 at 2:10 PM Eric Ladner 
>>> wrote:
>>>
 Yes, exactly!   Reporting Task -> Prometheus -> Grafana for keeping an
 eye on things running in NiFi.

 If you have any hints/tips on getting things working, I'd be grateful.

 On Tue, Mar 3, 2020 at 12:35 PM Yolanda Davis <
 yolanda.m.da...@gmail.com> wrote:

> Hi Eric,
>
> Were you looking to use the Prometheus Reporting Task for making
> metrics available for Prometheus scraping? I don't believe any
> documentation outside of what is in NiFi exists just yet, but I'm happy to
> help answer questions you may have (I've used this task recently).
>
> -yolanda
>
> On Tue, Mar 3, 2020 at 10:51 AM Eric Ladner 
> wrote:
>
>> Is there a guide to setting up Nifi and Prometheus anywhere?  The nar
>> docs are a little vague.
>>
>> Thanks,
>>
>> Eric Ladner
>>
>
>
> --
> --
> yolanda.m.da...@gmail.com
> @YolandaMDavis
>
>

 --
 Eric Ladner

>>>
>>>
>>> --
>>> --
>>> yolanda.m.da...@gmail.com
>>> @YolandaMDavis
>>>
>>>
>>
>> --
>> Eric Ladner
>>
>

-- 
Eric Ladner


Re: Metrics via Prometheus

2020-03-05 Thread Paul Parker
It would be great if you could share your story as a blog post.

Eric Ladner  wrote on Wed., 4 March 2020, 19:45:

> Thank you so much for your guidance.  I was able to get data flowing into
> Prometheus fairly easily once all the pieces were understood.
>
> Now, I just need to dig into Prometheus queries and make some Grafana
> dashboards.
>
> On Tue, Mar 3, 2020 at 2:54 PM Yolanda Davis 
> wrote:
>
>> Sure, not a problem! Hopefully the thoughts below can help you get started:
>>
>> As you may know, the PrometheusReportingTask is a bit different from
>> other tasks in that it actually exposes an endpoint for Prometheus to
>> scrape (vs. pushing data directly to Prometheus). When the task is
>> started, the endpoint is created on the port you designate under
>> “/metrics”; so just ensure that you don’t have anything already on the
>> port you select. If you want to ensure that you have a secured endpoint
>> for Prometheus to connect, be sure to use an SSL Context Service (a
>> controller service that will allow the reporting task to use the
>> appropriate key/trust stores for TLS). Also you’ll want to consider the
>> levels at which you are reporting (Root Group, Process Group or All
>> Components), especially in terms of the amount of data you are looking
>> to send back. JVM metrics can be sent as well as flow-specific metrics.
>> Finally, consider how often metrics should be refreshed by adjusting
>> the Scheduling Strategy in the Settings tab for the task.
>>
>> When starting the task you should be able to go directly to the
>> endpoint (without Prometheus) to confirm its output (e.g.
>> http://localhost:9092/metrics). You should see a format similar to
>> what Prometheus supports for its scraping jobs (see the example at
>> https://prometheus.io/docs/instrumenting/exposition_formats/#text-format-example)
>>
>> On the Prometheus side you’ll want to follow their instructions on how
>> to set up a scrape configuration that will point to the newly created
>> metrics endpoint. I’d recommend checking out the first steps for help
>> (https://prometheus.io/docs/introduction/first_steps/#configuring-prometheus)
>> and then, when you need to provide more advanced settings, take a look
>> here:
>> https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config.
>> The key is you’ll want to define a new scrape job that looks at the
>> NiFi endpoint for scraping. To start you may want to refer to the
>> cluster directly but later add the security credentials or use another
>> method for discovering the endpoint.
>>
>> Once these configurations are in place, and Prometheus is started (or
>> restarted), after a few seconds you should begin to see metrics landing
>> when querying in Grafana.
>>
>> I hope this helps!  Please let me know if you have any further questions.
>>
>> -yolanda
>>
>> On Tue, Mar 3, 2020 at 2:10 PM Eric Ladner  wrote:
>>
>>> Yes, exactly!   Reporting Task -> Prometheus -> Grafana for keeping an
>>> eye on things running in NiFi.
>>>
>>> If you have any hints/tips on getting things working, I'd be grateful.
>>>
>>> On Tue, Mar 3, 2020 at 12:35 PM Yolanda Davis 
>>> wrote:
>>>
 Hi Eric,

 Were you looking to use the Prometheus Reporting Task for making
 metrics available for Prometheus scraping? I don't believe any
 documentation outside of what is in NiFi exists just yet, but I'm happy to
 help answer questions you may have (I've used this task recently).

 -yolanda

 On Tue, Mar 3, 2020 at 10:51 AM Eric Ladner 
 wrote:

> Is there a guide to setting up Nifi and Prometheus anywhere?  The nar
> docs are a little vague.
>
> Thanks,
>
> Eric Ladner
>


 --
 --
 yolanda.m.da...@gmail.com
 @YolandaMDavis


>>>
>>> --
>>> Eric Ladner
>>>
>>
>>
>> --
>> --
>> yolanda.m.da...@gmail.com
>> @YolandaMDavis
>>
>>
>
> --
> Eric Ladner
>
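
For reference, a minimal prometheus.yml scrape job along the lines Yolanda
describes might look like the following sketch (the target host and port
are only placeholders matching her example endpoint; Prometheus' default
metrics_path is already /metrics):

    # prometheus.yml sketch: scrape the NiFi PrometheusReportingTask endpoint
    scrape_configs:
      - job_name: 'nifi'
        static_configs:
          - targets: ['localhost:9092']

If the reporting task is secured with an SSL Context Service, the job would
also need "scheme: https" and a tls_config section.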


Re: Logging/Monitoring of Invalid Processors at NiFi Startup

2020-03-05 Thread Dobbernack, Harald (Key-Work)
Hi Pierre,

Thank you for these ideas! I'm looking forward to trying them out.

Merci beaucoup!
Harald



From: Pierre Villard
Sent: Wednesday, 4 March 2020 19:46
To: users@nifi.apache.org
Subject: Re: Logging/Monitoring of Invalid Processors at NiFi Startup

Hi,

You could use the SiteToSiteStatusReportingTask and leverage the "runStatus"
field to list all the components (processors, controller services, etc.) that
are invalid. You could even use the QueryNiFiReportingTask to directly filter
only the invalid processors and sink this information somewhere.
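
For the QueryNiFiReportingTask option, the query could be along these lines
(an untested sketch: PROCESSOR_STATUS and runStatus follow the task's status
tables and the field named above, and the selected columns are assumptions):

    -- QueryNiFiReportingTask sketch: keep only the invalid processors
    SELECT id, name, groupId
    FROM PROCESSOR_STATUS
    WHERE runStatus = 'Invalid'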

Hope this helps,
Pierre

On Wed., 4 March 2020 at 16:44,  wrote:
With logback.xml you can fine-tune log messages, but don't ask me the details :-).
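
For illustration, an entry in conf/logback.xml raising one package's log
level might look like this (the package name and level are only an example):

    <!-- conf/logback.xml sketch: more detail from one package -->
    <logger name="org.apache.nifi.controller" level="DEBUG"/>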

Cheers Josef


On 04.03.20, 16:35, "Dobbernack, Harald (Key-Work)" 
 wrote:

Hi Josef,

Thank you for your input! We planned on merging/aggregating the errors into
one mail per hour (so the length of the mail would only depend on the number
of distinct error types in the timeframe), but of course I'll check with my
Splunk colleagues! We still have the problem, though, that the invalid
processors are not logged in the nifi-app.log. Or is there a way to enable
this?

Thank you,
Harald



From: josef.zahn...@swisscom.com
Sent: Wednesday, 4 March 2020 16:19
To: users@nifi.apache.org
Subject: Re: Logging/Monitoring of Invalid Processors at NiFi Startup

Hi Harald,

I can only tell you what we do: we send the whole nifi-app.log to Splunk and
scan there specifically for alarms/warnings. We don’t use e-mail notification
as it doesn’t help too much. In the nifi-app.log you would also see startup
issues, so we just focus on that instead.

In my eyes e-mail isn’t the right medium to alert/monitor; e.g. if you have
massive issues it would flood your e-mail account completely with warnings,
and I don’t think you want that.

Cheers Josef


From: "Dobbernack, Harald (Key-Work)" 

Reply to: "mailto:mailto:users@nifi.apache.org; 

Date: Wednesday, 4 March 2020 at 14:11
To: "mailto:mailto:users@nifi.apache.org; 

Subject: Logging/Monitoring of Invalid Processors at NiFi Startup

Our standalone NiFi 1.11.1 on Debian 10.2 will not, on service startup, write
an error or warning into the NiFi log if it deems a processor invalid, for
example if a Samba mount is not available and the ListFile or GetFile
processors cannot reach the mount. If, on the other hand, the processors are
running and the connection to the mount is lost, then we will see error
entries in the nifi-app.log, which we can then use to alert us.

We had thought to let NiFi report errors to us via push mail, using a
TailFile processor on its own log, but in the case of an unreachable mount at
service startup it wouldn't be able to alert us that something is wrong.

Is it possible to log invalid processors at service startup? Or how do you
monitor or report failures of this type?

Thank you,
Harald
--


Harald Dobbernack
Key-Work Consulting GmbH | Kriegsstr. 100 | 76133 Karlsruhe | Germany |
https://www.key-work.de | Datenschutz
Phone: +49-721-78203-264 | E-Mail: harald.dobbern...@key-work.de | Fax:
+49-721-78203-10

Key-Work Consulting GmbH, Karlsruhe, HRB 108695, HRG Mannheim
Managing Directors: Andreas Stappert, Tobin Wotring