[ https://issues.apache.org/jira/browse/HADOOP-2443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bryan Duxbury updated HADOOP-2443:
----------------------------------

    Attachment: 2443-v5.patch

The latest version of the patch removes getTableServers and related methods 
entirely and switches HTable$ClientScanner to use getRegionLocation.

Unfortunately, I now have TestTableIndex and TestTableMapReduce failing 
consistently. The errors seem to be related to scanners acting strangely with 
rows in the wrong regions, but I can't be sure of what's going on yet.
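For anyone following along, here is roughly the shape of the lookup path this 
change is aiming for. This is a minimal sketch only: aside from the 
getRegionLocation name, the types and the relocateRegion/lookupRegionInMeta 
helpers below are illustrative stand-ins, not code from the patch.

import java.util.Map;
import java.util.TreeMap;

/**
 * Minimal sketch of the lazy region cache discussed in this issue.
 * The types here are simplified stand-ins, not the real HBase client
 * classes; the evict-one-entry-on-NSRE behavior is the point.
 */
public class LazyRegionCache {

    /** A cached location: the region's start row and its server address. */
    record RegionLocation(String startRow, String server) {}

    // Keyed by region start row; floorEntry(row) yields the cached region
    // whose start row is the greatest one <= row. (A full implementation
    // would also check the region's end row before trusting the hit.)
    private final TreeMap<String, RegionLocation> cache = new TreeMap<>();

    /** Find the region for a row, going to META only on a cache miss. */
    public RegionLocation getRegionLocation(String row) {
        Map.Entry<String, RegionLocation> hit = cache.floorEntry(row);
        if (hit != null) {
            return hit.getValue();
        }
        RegionLocation fresh = lookupRegionInMeta(row); // one META round trip
        cache.put(fresh.startRow(), fresh);
        return fresh;
    }

    /** On NSRE, drop and refresh the stale entry only, not the whole table. */
    public RegionLocation relocateRegion(String row) {
        Map.Entry<String, RegionLocation> stale = cache.floorEntry(row);
        if (stale != null) {
            cache.remove(stale.getKey());
        }
        return getRegionLocation(row); // re-reads META for just this region
    }

    // Placeholder for the actual META table scan.
    private RegionLocation lookupRegionInMeta(String row) {
        return new RegionLocation("", "regionserver.example.com:60020");
    }
}

The difference from current trunk is in relocateRegion: a 
NotServingRegionException costs one META lookup for the affected region 
instead of a full rebuild of the table's region list.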

> [hbase] Keep lazy cache of regions in client rather than an 'authoritative' 
> list
> --------------------------------------------------------------------------------
>
>                 Key: HADOOP-2443
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2443
>             Project: Hadoop
>          Issue Type: Improvement
>          Components: contrib/hbase
>            Reporter: stack
>            Assignee: Bryan Duxbury
>             Fix For: 0.16.0
>
>         Attachments: 2443-v3.patch, 2443-v4.patch, 2443-v5.patch
>
>
> Currently, when the client gets a NotServingRegionException -- usually 
> because the region is in the middle of being split, or there has been a 
> regionserver crash and the region is being moved elsewhere -- the client 
> does a complete refresh of its cache of region locations for the table.
>
> Chatting with Jim about a Paul Saab upload issue from Saturday night: when 
> tables are big and comprised of regions that are splitting fast (because of 
> bulk upload), it's unlikely a client will ever be able to obtain a stable 
> list of all region locations.  Given that any update or scan requires that 
> the list of all regions be in place before it proceeds, this can get in the 
> way of the client succeeding when the cluster is under load.
>
> Chatting, we figure it's better that the client hold a lazy region cache: 
> on NSRE, figure out where only that region has gone and update the 
> client-side cache for that entry alone, rather than throwing out all we 
> know of a table every time.
>
> Hopefully this will fix the issue PS was experiencing where, during an 
> intense upload, he was unable to get/scan/hql the same table.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.