[
https://issues.apache.org/jira/browse/PHOENIX-1819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14569806#comment-14569806
]
James Taylor commented on PHOENIX-1819:
---------------------------------------
Patch looks great, [~samarthjain]. IMO it's ok to rely on the metric enum name
being stable. I'm not sure you want to take the memory hit of using a Map
versus a List. Also, consider a List<ReadOnlyMetric> instead of a
Pair<String,Long>, where ReadOnlyMetric only exposes the getName(),
getDescription(), and getValue() methods (you could perhaps derive Metric from
it). Isn't the expectation that the client will iterate through all the
metrics and dump them to a log line?
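Roughly what I have in mind (just a sketch, the names aren't tied to your patch):
{code:java}
// Sketch only: the read-only view a client would iterate over when logging.
public interface ReadOnlyMetric {
    String getName();
    String getDescription();
    long getValue();
}

// The mutable metric used internally could then extend the read-only view.
public interface Metric extends ReadOnlyMetric {
    void increment();
    void change(long delta);
    void reset();
}
{code}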
It's good that you're salting your tables in the metrics tests to ensure
you've got more than one region involved. Are you doing that for all of your
per-phoenix-statement tests? One minor addition I'd recommend is to run the
upsert/select (to the same table) and delete tests both with and without auto
commit on. Also, a test with auto commit off that runs multiple statements
prior to committing would be a good addition (sketch below). Did you test with
the publishOnClose flag as both true and false?
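For the multiple-statements-before-commit case, I'm picturing something like
this (illustrative only - the JDBC URL, table, and columns are placeholders):
{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class MultiStatementCommitExample {
    // Auto commit off: several statements batched, then a single commit.
    // The request-level mutation metrics should cover all three statements.
    static void runBatchThenCommit(String jdbcUrl) throws SQLException {
        try (Connection conn = DriverManager.getConnection(jdbcUrl)) {
            conn.setAutoCommit(false);
            try (Statement stmt = conn.createStatement()) {
                stmt.executeUpdate("UPSERT INTO T VALUES (1, 'a')");
                stmt.executeUpdate("UPSERT INTO T VALUES (2, 'b')");
                stmt.executeUpdate("DELETE FROM T WHERE K = 1");
            }
            conn.commit();
            // assertions on the collected metrics would go here
        }
    }
}
{code}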
One other question - how will index maintenance/usage be reflected in the
metrics? Will the index just show up as another table in the read/write metrics
per request? What about sequences? Any metrics around those?
Other than that, the only question is whether and how this impacts perf under
heavy load. I can see that the way you've done it (using AtomicLongs) makes it
easier to encapsulate the metric logic. I still think you could avoid needing
any synchronization for the counters, though, since there are clear places in
the code that merge the results of the parallel threads back together.
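To make that concrete (purely illustrative, the class and field names are made
up): each parallel task could accumulate into plain long fields, and the thread
that merges the task results sums them, so nothing needs to be atomic on the
hot path.
{code:java}
import java.util.List;

// Illustrative only: each parallel scan task keeps unsynchronized counters and
// the single thread that merges the task results sums them afterwards, so no
// AtomicLongs are needed while the tasks are running.
class TaskMetrics {
    long parallelScans;
    long spoolFiles;
    long taskExecutionTimeMs;

    void add(TaskMetrics other) {
        parallelScans += other.parallelScans;
        spoolFiles += other.spoolFiles;
        taskExecutionTimeMs += other.taskExecutionTimeMs;
    }
}

class MetricsMergeExample {
    // Called at the point where the results of the parallel threads are merged.
    static TaskMetrics merge(List<TaskMetrics> perTaskMetrics) {
        TaskMetrics total = new TaskMetrics();
        for (TaskMetrics m : perTaskMetrics) {
            total.add(m);
        }
        return total;
    }
}
{code}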
> Report resource consumption per phoenix statement
> -------------------------------------------------
>
> Key: PHOENIX-1819
> URL: https://issues.apache.org/jira/browse/PHOENIX-1819
> Project: Phoenix
> Issue Type: New Feature
> Reporter: Samarth Jain
> Assignee: Samarth Jain
> Fix For: 5.0.0, 4.4.1
>
> Attachments: PHOENIX-1819-rebased.patch, PHOENIX-1819.patch
>
>
> In order to get insight into what phoenix is doing and how much it is doing
> per request, it would be ideal to get a single log line per phoenix request.
> The log line could contain request level metrics like:
> 1) Number of spool files created.
> 2) Number of parallel scans.
> 3) Number of serial scans.
> 4) Query failed - boolean
> 5) Query time out - boolean
> 6) Query time.
> 7) Mutation time.
> 8) Mutation size in bytes.
> 9) Number of mutations.
> 10) Bytes allocated by the memory manager.
> 11) Time spent by threads waiting for the memory to be allocated.
> 12) Number of tasks submitted to the pool.
> 13) Number of tasks rejected.
> 14) Time spent by tasks in the queue.
> 15) Time taken by tasks to complete - from construction to execution
> completion.
> 16) Time taken by tasks to execute.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)