[ 
https://issues.apache.org/jira/browse/PHOENIX-2417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15103877#comment-15103877
 ] 

James Taylor commented on PHOENIX-2417:
---------------------------------------

[~ankit.singhal] - yes, good point - if we get a PhoenixConnection (even on the 
server side), upgrade will be triggered (which is *not* what we want). Instead, 
the only thing you should do in MetaDataRegionObserver is delete all the rows 
from the SYSTEM.STATS table. Just use something like 
{{e.getEnvironment().getTable(TableName.valueOf(PhoenixDatabaseMetaData.SYSTEM_STATS_NAME_BYTES));}} 
to get an HTableInterface, then issue a scan over the entire table (with a 
FirstKeyOnlyFilter) and issue a Delete for each row.
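
Something along these lines (a sketch only, assuming the HBase 0.98-era coprocessor API Phoenix used at the time; {{e}} is the ObserverContext available in the coprocessor hook, and the batching into a single {{delete(List)}} call is my choice, not required):
{code}
// Clear SYSTEM.STATS from within MetaDataRegionObserver without opening a
// PhoenixConnection, so no client-side upgrade is triggered.
HTableInterface statsTable = e.getEnvironment().getTable(
        TableName.valueOf(PhoenixDatabaseMetaData.SYSTEM_STATS_NAME_BYTES));
try {
    Scan scan = new Scan();
    // We only need row keys, so skip all but the first KeyValue per row.
    scan.setFilter(new FirstKeyOnlyFilter());
    ResultScanner scanner = statsTable.getScanner(scan);
    try {
        List<Delete> deletes = new ArrayList<Delete>();
        for (Result result = scanner.next(); result != null; result = scanner.next()) {
            deletes.add(new Delete(result.getRow()));
        }
        if (!deletes.isEmpty()) {
            statsTable.delete(deletes);
        }
    } finally {
        scanner.close();
    }
} finally {
    statsTable.close();
}
{code}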

Then remove this upgrade code from ConnectionQueryServicesImpl:
{code}
HBaseAdmin admin = null;
try {
    admin = getAdmin();
    admin.disableTable(PhoenixDatabaseMetaData.SYSTEM_STATS_NAME_BYTES);
    try {
        admin.deleteTable(PhoenixDatabaseMetaData.SYSTEM_STATS_NAME_BYTES);
    } catch (org.apache.hadoop.hbase.TableNotFoundException e) {
        logger.debug("Stats table was not found during upgrade!!");
    }
} finally {
    if (admin != null)
        admin.close();
}
{code}

> Compress memory used by row key byte[] of guideposts
> ----------------------------------------------------
>
>                 Key: PHOENIX-2417
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-2417
>             Project: Phoenix
>          Issue Type: Sub-task
>            Reporter: James Taylor
>            Assignee: Ankit Singhal
>             Fix For: 4.7.0
>
>         Attachments: PHOENIX-2417.patch, PHOENIX-2417_encoder.diff, 
> PHOENIX-2417_v2_wip.patch, StatsUpgrade_wip.patch
>
>
> We've found that smaller guideposts are better in terms of minimizing any 
> increase in latency for point scans. However, this increases the amount of 
> memory significantly when caching the guideposts on the client. Guideposts 
> are equidistant row keys in the form of raw byte[] which are likely to have 
> a large percentage of their leading bytes in common (as they're stored in 
> sorted order). We should use a simple compression technique to mitigate 
> this. I noticed that Apache Parquet has a run-length encoding - perhaps we 
> can use that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
