Grr.. you'd think "Users" would have the ability to add items to the "For
Users" section on the wiki.

On Thu, Mar 5, 2020 at 10:33 AM Eric Ladner <eric.lad...@gmail.com> wrote:

> eh.. maybe I could throw it on the wiki if somebody grants write access.
>
> On Thu, Mar 5, 2020 at 10:28 AM Joe Witt <joe.w...@gmail.com> wrote:
>
>> Eric,
>>
>> It is probably easier to use blogspot or something like that.  But if you
>> want to offer a guest submission I'm sure we can figure it out for the
>> Apache blog too.  I'm just not sure on the steps.
>>
>> Thanks
>>
>> On Thu, Mar 5, 2020 at 11:27 AM Eric Ladner <eric.lad...@gmail.com>
>> wrote:
>>
>>> how would I submit something to the NiFi blog?
>>>
>>> On Thu, Mar 5, 2020 at 6:37 AM Eric Ladner <eric.lad...@gmail.com>
>>> wrote:
>>>
>>>> Good idea.  I'll look into that today.
>>>>
>>>> On Thu, Mar 5, 2020 at 5:56 AM Paul Parker <nifi.sur...@gmail.com>
>>>> wrote:
>>>>
>>>>> It would be great if you could share your story as a blog post.
>>>>>
>>>>> Eric Ladner <eric.lad...@gmail.com> wrote on Wed., Mar 4, 2020, at
>>>>> 19:45:
>>>>>
>>>>>> Thank you so much for your guidance.  I was able to get data flowing
>>>>>> into Prometheus fairly easily once all the pieces were understood.
>>>>>>
>>>>>> Now, I just need to dig into Prometheus queries and make some Grafana
>>>>>> dashboards.
>>>>>>
>>>>>> On Tue, Mar 3, 2020 at 2:54 PM Yolanda Davis <
>>>>>> yolanda.m.da...@gmail.com> wrote:
>>>>>>
>>>>>>> Sure, not a problem!  Hopefully the thoughts below can help you
>>>>>>> get started:
>>>>>>>
>>>>>>> As you may know, the PrometheusReportingTask is a bit different
>>>>>>> from other tasks in that it actually exposes an endpoint for
>>>>>>> Prometheus to scrape (vs. pushing data directly to Prometheus).
>>>>>>> When the task is started, the endpoint is created on the port you
>>>>>>> designate under “/metrics”, so just ensure that you don’t have
>>>>>>> anything already listening on the port you select. If you want a
>>>>>>> secured endpoint for Prometheus to connect to, be sure to use an
>>>>>>> SSL Context Service (a controller service that allows the
>>>>>>> reporting task to use the appropriate key/trust stores for TLS).
>>>>>>> Also, you'll want to consider the level at which you are reporting
>>>>>>> (Root Group, Process Group, or All Components), especially in
>>>>>>> terms of the amount of data you are looking to send back.  JVM
>>>>>>> metrics can be sent as well as flow-specific metrics. Finally,
>>>>>>> consider how often metrics should be refreshed by adjusting the
>>>>>>> Scheduling Strategy in the Settings tab for the task.
>>>>>>>
>>>>>>> After starting the task you should be able to go directly to the
>>>>>>> endpoint (without Prometheus) to confirm its output (e.g.
>>>>>>> http://localhost:9092/metrics ).  You should see a format similar
>>>>>>> to what Prometheus supports for its scraping jobs (see the example
>>>>>>> at
>>>>>>> https://prometheus.io/docs/instrumenting/exposition_formats/#text-format-example
>>>>>>> )
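>>>>>>>
>>>>>>> For a quick check from the command line (assuming an unsecured
>>>>>>> endpoint on port 9092; adjust the host and port to your settings),
>>>>>>> something along these lines should return the text-format output,
>>>>>>> with HELP/TYPE comments followed by samples roughly like the ones
>>>>>>> shown (metric and label names here are illustrative only):
>>>>>>>
>>>>>>>   curl http://localhost:9092/metrics
>>>>>>>
>>>>>>>   # HELP nifi_amount_flowfiles_received Total FlowFiles received
>>>>>>>   # TYPE nifi_amount_flowfiles_received gauge
>>>>>>>   nifi_amount_flowfiles_received{instance="nifi-host",component_name="NiFi Flow"} 1278.0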
>>>>>>>
>>>>>>> On the Prometheus side you’ll want to follow their instructions
>>>>>>> on how to set up a scrape configuration that points to the newly
>>>>>>> created metrics endpoint. I’d recommend checking out the first
>>>>>>> steps for help (
>>>>>>> https://prometheus.io/docs/introduction/first_steps/#configuring-prometheus)
>>>>>>> and then, when you need more advanced settings, take a look here:
>>>>>>> https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config.
>>>>>>> The key is that you’ll want to define a new scrape job that looks
>>>>>>> at the NiFi endpoint for scraping.  To start you may want to refer
>>>>>>> to the cluster directly, but later add the security credentials or
>>>>>>> use another method for discovering the endpoint.
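>>>>>>>
>>>>>>> As a rough sketch (the job name, host, port, and interval below
>>>>>>> are just placeholders; match them to your Reporting Task
>>>>>>> settings), the scrape job in prometheus.yml could look something
>>>>>>> like:
>>>>>>>
>>>>>>>   scrape_configs:
>>>>>>>     - job_name: 'nifi'
>>>>>>>       metrics_path: '/metrics'
>>>>>>>       scrape_interval: 15s
>>>>>>>       static_configs:
>>>>>>>         - targets: ['nifi-host:9092']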
>>>>>>>
>>>>>>> Once these configurations are in place and Prometheus is started
>>>>>>> (or restarted), after a few seconds you should begin to see
>>>>>>> metrics landing when querying in Grafana.
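>>>>>>>
>>>>>>> In Grafana, once Prometheus is added as a data source, a first
>>>>>>> panel can be built from a simple PromQL query such as the one
>>>>>>> below (the metric and label names are illustrative; pick real ones
>>>>>>> from your /metrics output):
>>>>>>>
>>>>>>>   nifi_amount_flowfiles_received{component_name="NiFi Flow"}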
>>>>>>>
>>>>>>> I hope this helps!  Please let me know if you have any further
>>>>>>> questions.
>>>>>>>
>>>>>>> -yolanda
>>>>>>>
>>>>>>> On Tue, Mar 3, 2020 at 2:10 PM Eric Ladner <eric.lad...@gmail.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Yes, exactly!   Reporting Task -> Prometheus -> Grafana for keeping
>>>>>>>> an eye on things running in NiFi.
>>>>>>>>
>>>>>>>> If you have any hints/tips on getting things working, I'd be
>>>>>>>> grateful.
>>>>>>>>
>>>>>>>> On Tue, Mar 3, 2020 at 12:35 PM Yolanda Davis <
>>>>>>>> yolanda.m.da...@gmail.com> wrote:
>>>>>>>>
>>>>>>>>> Hi Eric,
>>>>>>>>>
>>>>>>>>> Were you looking to use the Prometheus Reporting Task for
>>>>>>>>> making metrics available for Prometheus scraping? I don't
>>>>>>>>> believe any documentation outside of what is in NiFi exists just
>>>>>>>>> yet, but I'm happy to help answer questions you may have (I've
>>>>>>>>> used this task recently).
>>>>>>>>>
>>>>>>>>> -yolanda
>>>>>>>>>
>>>>>>>>> On Tue, Mar 3, 2020 at 10:51 AM Eric Ladner <eric.lad...@gmail.com>
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>>> Is there a guide to setting up NiFi and Prometheus anywhere?
>>>>>>>>>> The NAR docs are a little vague.
>>>>>>>>>>
>>>>>>>>>> Thanks,
>>>>>>>>>>
>>>>>>>>>> Eric Ladner
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> --
>>>>>>>>> yolanda.m.da...@gmail.com
>>>>>>>>> @YolandaMDavis
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>> Eric Ladner
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> --
>>>>>>> yolanda.m.da...@gmail.com
>>>>>>> @YolandaMDavis
>>>>>>>
>>>>>>>
>>>>>>
>>>>>> --
>>>>>> Eric Ladner
>>>>>>
>>>>>
>>>>
>>>> --
>>>> Eric Ladner
>>>>
>>>
>>>
>>> --
>>> Eric Ladner
>>>
>>
>
> --
> Eric Ladner
>


-- 
Eric Ladner
