[ 
https://issues.apache.org/jira/browse/LOG4J2-2937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17203734#comment-17203734
 ] 

Ralph Goers commented on LOG4J2-2937:
-------------------------------------

I have no idea how Splunk is implemented, as it is a proprietary product, but 
since its primary purpose is as a logging query engine and logs are generally 
in time sequence, I would imagine it has that capability built in. You can 
google Splunk and look at its command reference for all the various things you 
can do, like timechart and streamstats (both of which I use heavily), and see 
how powerful it is.

Yes, Splunk polls once per minute. If it ever goes down (a rarity) you would 
have a gap. Yes, that is true about granularity. I have set Splunk to poll 
once per minute; the overhead of that is minuscule. 

Yes, trying to do more would impact the application too much. Even products 
like New Relic or Datadog try to minimize the amount of work done in 
application threads. They have separate threads where they push data to their 
central servers. They may do more work on those threads to embellish the data.
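To illustrate the pattern described above, here is a minimal sketch (not
Log4j's or any vendor's actual implementation; the class and method names are
hypothetical): application threads do nothing more than a cheap counter
increment, while a single scheduled background thread does the expensive
work of flushing the accumulated count toward a monitoring backend.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.LongAdder;

public class AsyncReporter {
    // LongAdder keeps increments cheap and low-contention on hot app threads.
    private final LongAdder events = new LongAdder();
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    // Called from application threads: just an increment, no I/O.
    public void record() {
        events.increment();
    }

    // Current accumulated count (what a poller would read).
    public long pending() {
        return events.sum();
    }

    // The expensive work (serializing, pushing to a central server) happens
    // here, off the application threads, once per interval.
    public void start(long periodSeconds) {
        scheduler.scheduleAtFixedRate(
                () -> push(events.sumThenReset()),
                periodSeconds, periodSeconds, TimeUnit.SECONDS);
    }

    void push(long count) {
        // Placeholder for a network send to the monitoring backend.
        System.out.println("events in last interval: " + count);
    }
}
```

The reset-on-read (`sumThenReset`) gives per-interval deltas, which matches
the once-per-minute polling model discussed above.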

> Provide counters to measure log rate
> ------------------------------------
>
>                 Key: LOG4J2-2937
>                 URL: https://issues.apache.org/jira/browse/LOG4J2-2937
>             Project: Log4j 2
>          Issue Type: New Feature
>            Reporter: Dennys Fredericci
>            Priority: Major
>         Attachments: image-2020-09-28-21-10-13-850.png
>
>
> As a Log4j API user, it would be really nice to have a way to get the number 
> of log calls for each level without any instrumentation or bytecode 
> manipulation, something native to the Log4j API.
> Once this interface is implemented, it can be exposed through JMX or used by 
> other libraries to send the log rate to monitoring systems such as Datadog, 
> New Relic, Dynatrace, etc.  :)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
