[ https://issues.apache.org/jira/browse/HBASE-16095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15383653#comment-15383653 ]
stack commented on HBASE-16095:
-------------------------------

This is another attempt at changing hbase to fix broken phoenix secondary indices. Do we have to do this? As per [~ghelmling]'s comment, can't we keep the phoenix stuff up in phoenix? Secondary indices via transactions are almost here. Isn't that the proper fix, rather than adding new pools to hbase (we don't need more pools), etc.?

Why do we need this change if the configuration below could address the deadlock?

bq. ....Possible distributed deadlocks can be prevented via custom RpcScheduler + RpcController configuration via HBASE-11048 and PHOENIX-938.

Regarding the below...

bq. However, region opening also has the same deadlock situation, because data region open has to replay the WAL edits to the index regions.

This sort of dependence amongst regions -- i.e. the index region has to be online before the data region can come online -- is not supported in hbase; what happens if the server carrying the index region crashes... and other scenarios, etc. Has it been worked through? If so, where can I read about it?

bq. This may be useful for other "framework" level tables from Phoenix, Tephra, Trafodion, etc. if they want some specific tables to come online faster.

We already have a mechanism for onlining important regions (meta, namespace, etc.), and it has loads of holes in it. The new AMv2 will go a long way toward plugging a bunch of them. In this issue we are proposing a new means of doing a similar thing, but on an even shakier foundation. Seems dodgy [~enis], brittle as [~ghelmling] says.

Phoenix users will have to ensure they configure all index tables as PRIORITY (making index tables 'high priority' is a little unexpected)? For preexisting tables they'll have to go through and enable this everywhere? How are you going to message that?
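For illustration only, a rough sketch of the per-table step a Phoenix operator would presumably have to run against each preexisting index table, assuming the HTableDescriptor priority accessor this patch proposes; the table name and the use of HConstants.HIGH_QOS are placeholders, not anything the patch prescribes:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class MarkIndexTableHighPriority {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Hypothetical index table name; a real deployment would loop over all
      // of its Phoenix index tables.
      TableName indexTable = TableName.valueOf("MY_INDEX_TABLE");
      HTableDescriptor htd = admin.getTableDescriptor(indexTable);
      // Raise the table's relative priority so its regions would be queued on
      // the proposed high-priority region-open pool. HConstants.HIGH_QOS is
      // used here only as a plausible "above normal" value.
      htd.setPriority(HConstants.HIGH_QOS);
      admin.modifyTable(indexTable, htd);
    }
  }
}
{code}

That is one such alter per index table, on every existing cluster, which is the migration/messaging burden being questioned above.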
> Add priority to TableDescriptor and priority region open thread pool
> --------------------------------------------------------------------
>
>                 Key: HBASE-16095
>                 URL: https://issues.apache.org/jira/browse/HBASE-16095
>             Project: HBase
>          Issue Type: Bug
>            Reporter: Enis Soztutar
>            Assignee: Enis Soztutar
>             Fix For: 2.0.0, 1.3.0, 1.4.0, 0.98.21
>
>         Attachments: HBASE-16095-0.98.patch, HBASE-16095-0.98.patch, hbase-16095_v0.patch, hbase-16095_v1.patch, hbase-16095_v2.patch, hbase-16095_v3.patch
>
>
> This is in a similar area to HBASE-15816, and is also required by the current secondary indexing for Phoenix.
> The problem with Phoenix secondary indexes is that data table regions depend on index regions to be able to make progress. Possible distributed deadlocks can be prevented via custom RpcScheduler + RpcController configuration via HBASE-11048 and PHOENIX-938. However, region opening has the same deadlock situation, because a data region open has to replay its WAL edits to the index regions. There is only one thread pool to open regions, with 3 workers by default. So if the cluster is recovering or restarting from scratch, the deadlock happens: some index regions cannot be opened because they sit in the same queue, waiting for data regions to open, which in turn are waiting on RPCs to index regions that are not yet open. This is reproduced in almost all Phoenix secondary index clusters (mutable tables w/o transactions) that we see.
> The proposal is to have a "high priority" region opening thread pool, and have the HTD carry the relative priority of a table. This may be useful for other "framework" level tables from Phoenix, Tephra, Trafodion, etc. if they want some specific tables to come online faster.
> As a follow-up patch, we can also take a look at how this priority information can be used by the rpc scheduler on the server side or the rpc controller on the client side, so that we do not have to set priorities manually per-operation.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)