[ https://issues.apache.org/jira/browse/PHOENIX-2940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319195#comment-15319195 ]

Josh Elser commented on PHOENIX-2940:
-------------------------------------

bq. How about if we make the size of the stats cache configurable through a new 
QueryServices config parameter?

Sure! I totally spaced on that not being exposed already.

bq. Might be simplest to populate the cache on demand when the stats are asked 
for instead of through a timer task. 

Makes sense. Can switch over to using a Guava LoadingCache, which should make 
that logic pretty simple. I was lamenting the lack of runtime insight we would 
have. I tried to add some logging as a short-term way to inspect this in the 
future, but that's less than ideal. I need to put more thought into what else 
we could do to track this information in clients (including what may already 
exist).
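
For illustration, here's a rough sketch of what an on-demand, size-bounded 
LoadingCache could look like. The class name, config key, and 
fetchStatsFromServer helper are placeholders I made up, not the actual Phoenix 
API:

{code:java}
import java.util.concurrent.TimeUnit;

import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;

public class StatsCacheSketch {

    private final LoadingCache<String, byte[]> statsCache;

    // maxCacheSize would come from a new QueryServices config parameter,
    // e.g. something like "phoenix.stats.cache.maxSize" (hypothetical name).
    public StatsCacheSketch(long maxCacheSize) {
        this.statsCache = CacheBuilder.newBuilder()
                // Bound the cache so we don't blow out the client's heap.
                .maximumSize(maxCacheSize)
                // Let entries age out so stale guideposts eventually get refreshed.
                .expireAfterWrite(10, TimeUnit.MINUTES)
                .build(new CacheLoader<String, byte[]>() {
                    @Override
                    public byte[] load(String tableName) throws Exception {
                        // Populate on demand (on a cache miss) instead of via a timer task.
                        return fetchStatsFromServer(tableName);
                    }
                });
    }

    // Placeholder for the actual scan/RPC against SYSTEM.STATS.
    private byte[] fetchStatsFromServer(String tableName) {
        return new byte[0];
    }

    public byte[] getStats(String tableName) {
        return statsCache.getUnchecked(tableName);
    }
}
{code}

A real patch would probably want to weigh entries by their serialized size 
(CacheBuilder.maximumWeight with a weigher) rather than by entry count, but the 
shape is the same.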

bq. If the client-side cache is too small, then the client might be querying 
too often for the stats, but I'm not sure what the best way to prevent this 
would be. Ideally, we really want to have a propensity to cache the stats that 
are asked for most frequently. If we go the timer route, and the client-side 
cache is too small, then it just becomes less and less likely that the stats 
are in the cache when we need them (essentially disabling stats). It'd help if 
we had PHOENIX-2675 so that tables that don't need stats wouldn't fill up the 
cache.

Agreed, limiting the contents of the cache based on size is rather difficult to 
work around, but it's important so that we don't blow out the client's heap. The 
normal eviction policy will work pretty well (evicting the least-recently-used 
entries first), but, like you point out, that doesn't help if the cache is 
woefully small to begin with. I could think up some tricky solution which would 
warn the user if we were continually re-fetching the stats, but that would 
still require human interaction, which is "meh".
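
Just to sketch what that "tricky solution" might look like (purely illustrative; 
the class name, threshold, and logger are made up): if the cache is built with 
CacheBuilder.recordStats(), a periodic check of the hit rate and eviction count 
could at least log a hint that the cache is undersized.

{code:java}
import com.google.common.cache.CacheStats;
import com.google.common.cache.LoadingCache;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class StatsCacheMonitor {

    private static final Logger LOG = LoggerFactory.getLogger(StatsCacheMonitor.class);

    // Made-up threshold; a real patch would need to pick (or make configurable) something sensible.
    private static final double MIN_HIT_RATE = 0.5;

    /** Assumes the cache was built with CacheBuilder.recordStats(). */
    public static void warnIfUndersized(LoadingCache<?, ?> statsCache) {
        CacheStats stats = statsCache.stats();
        if (stats.evictionCount() > 0 && stats.hitRate() < MIN_HIT_RATE) {
            LOG.warn("Stats cache hit rate is {} with {} evictions; the cache may be too small "
                    + "and stats are being re-fetched repeatedly",
                    stats.hitRate(), stats.evictionCount());
        }
    }
}
{code}

It still only tells a human to go bump the config, so it doesn't solve the 
underlying problem, but it's cheap to add.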

bq. I think it's ok to remove the PTable.getTableStats() method, as an old 
client would just get no table stats which we handle correctly today.

Ok. I was erring on the side of not breaking any old code, but that was just a 
gut reaction.

> Remove STATS RPCs from rowlock
> ------------------------------
>
>                 Key: PHOENIX-2940
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-2940
>             Project: Phoenix
>          Issue Type: Improvement
>         Environment: HDP 2.3 + Apache Phoenix 4.6.0
>            Reporter: Nick Dimiduk
>            Assignee: Josh Elser
>             Fix For: 4.9.0
>
>         Attachments: PHOENIX-2940.001.patch
>
>
> We have an unfortunate situation wherein we potentially execute many RPCs 
> while holding a row lock. This problem is discussed in detail on the user 
> list thread ["Write path blocked by MetaDataEndpoint acquiring region 
> lock"|http://search-hadoop.com/m/9UY0h2qRaBt6Tnaz1&subj=Write+path+blocked+by+MetaDataEndpoint+acquiring+region+lock].
>  During some situations, the 
> [MetaDataEndpoint|https://github.com/apache/phoenix/blob/10909ae502095bac775d98e6d92288c5cad9b9a6/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L492]
>  coprocessor will attempt to refresh its view of the schema definitions and 
> statistics. This involves [taking a 
> rowlock|https://github.com/apache/phoenix/blob/10909ae502095bac775d98e6d92288c5cad9b9a6/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2862],
>  executing a scan against the [local 
> region|https://github.com/apache/phoenix/blob/10909ae502095bac775d98e6d92288c5cad9b9a6/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L542],
>  and then a scan against a [potentially 
> remote|https://github.com/apache/phoenix/blob/10909ae502095bac775d98e6d92288c5cad9b9a6/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L964]
>  statistics table.
> This issue is apparently exacerbated by the use of user-provided timestamps 
> (in my case, the use of the ROW_TIMESTAMP feature, or perhaps as in 
> PHOENIX-2607). When combined with other issues (PHOENIX-2939), we end up with 
> total gridlock in our handler threads -- everyone queued behind the rowlock, 
> scanning and rescanning SYSTEM.STATS. Because this happens in the 
> MetaDataEndpoint, the means by which all clients refresh their knowledge of 
> schema, gridlock in that RS can effectively stop all forward progress on the 
> cluster.



