[jira] [Assigned] (HBASE-20312) CCSMap: A faster, GC-friendly, less memory Concurrent Map for memstore

2018-03-29 Thread chunhui shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chunhui shen reassigned HBASE-20312:


Assignee: Chance Li

> CCSMap: A faster, GC-friendly, less memory Concurrent Map for memstore
> --
>
> Key: HBASE-20312
> URL: https://issues.apache.org/jira/browse/HBASE-20312
> Project: HBase
>  Issue Type: New Feature
>  Components: regionserver
>Reporter: Xiang Wang
>Assignee: Chance Li
>Priority: Minor
> Attachments: ccsmap-branch-1.1.patch, jira1.png, jira2.png, jira3.png
>
>
> Now HBase uses ConcurrentSkipListMap as the memstore's data structure.
> Although MemSLAB reduces the memory fragmentation brought by key-value pairs,
> hundreds of millions of key-value pairs still make young-generation
> garbage-collection (GC) pause times long.
>  
> These are 2 GC problems with ConcurrentSkipListMap:
> 1. HBase needs 3 objects on average to store one key-value: one Index (the
> skip list's average node height is 1), one Node, and one KeyValue. Too many
> objects are created for the memstore.
> 2. Recently inserted KeyValues and their map structures (Index, Node) are
> allocated in the young generation. The card table (for the CMS GC algorithm)
> or RSet (for the G1 GC algorithm) changes frequently under high write
> throughput, which makes YGC slow.
>  
> We developed a new skip-list map called CompactedConcurrentSkipListMap (CCSMap
> for short), which provides features similar to ConcurrentSkipListMap but gets
> rid of the objects for every key-value pair.
> CCSMap's memory structure is shown in this picture:
> !jira1.png!
>  
> One CCSMap consists of a certain number of chunks. One chunk consists of a
> certain number of nodes. One node corresponds to one element. All of the
> element's information and its key-value are encoded on a contiguous memory
> segment without any objects.
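> As a minimal illustration (not the code from the attached patch; the field
> names and widths here are assumptions), one node might be written onto a
> chunk's contiguous segment like this:
> {code:java}
> import java.nio.ByteBuffer;
>
> public class NodeLayout {
>   // Hypothetical layout: | next offset | level | keyLen | valueLen | key | value |
>   // Writing an element only touches the backing segment; no per-entry objects.
>   static int writeNode(ByteBuffer chunk, int offset, byte[] key, byte[] value) {
>     chunk.putInt(offset, -1);                        // next-node offset, -1 = tail
>     chunk.putInt(offset + 4, 1);                     // skip-list level of this node
>     chunk.putInt(offset + 8, key.length);
>     chunk.putInt(offset + 12, value.length);
>     chunk.position(offset + 16);
>     chunk.put(key).put(value);                       // raw key-value bytes
>     return offset + 16 + key.length + value.length;  // next free position
>   }
> }
> {code}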
> Features:
> 1. All insert, update, and delete operations on CCSMap are lock-free (see the
> sketch below).
> 2. Consumes less memory; it brings a 40% memory saving for 50-byte key-values.
> 3. Faster on small key-values because of better cache-line usage: 20~30%
> better read/write throughput than ConcurrentSkipListMap for 50-byte key-values.
> CCSMap does not support recycling space when deleting elements, but that
> doesn't matter for HBase because of region flushes.
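> For feature 1, a minimal sketch of the lock-free linking idea (illustrative
> only, not the patch's code; modelling the next-pointer words with an
> AtomicIntegerArray is an assumption, while the real map would CAS offsets
> inside its chunks):
> {code:java}
> import java.util.concurrent.atomic.AtomicIntegerArray;
>
> public class LockFreeLink {
>   // next[i] holds the offset of node i's successor; -1 marks the tail.
>   static void insertAfter(AtomicIntegerArray next, int pred, int newNode) {
>     while (true) {
>       int succ = next.get(pred);        // successor observed for pred
>       next.set(newNode, succ);          // new node points at that successor
>       if (next.compareAndSet(pred, succ, newNode)) {
>         return;                         // CAS won: node is linked
>       }                                 // another writer won the race; retry
>     }
>   }
> }
> {code}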
> CCSMap has been running on Alibaba's HBase clusters for over 17 months, and it
> cuts down YGC time significantly. Here are 2 graphs of before and after.
> !jira2.png!
> !jira3.png!
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20312) CCSMap: A faster, GC-friendly, less memory Concurrent Map for memstore

2018-03-29 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16418741#comment-16418741
 ] 

chunhui shen commented on HBASE-20312:
--

CCSMap is a significant improvement for our (Alibaba) online applications with 
small-KV scenarios; it brings obvious benefits both in GC and in throughput.
It has been the default option for a long time. I think anyone interested in 
this could merge the patch into their branch for a quick test.
Thanks for any feedback and suggestions.

> CCSMap: A faster, GC-friendly, less memory Concurrent Map for memstore
> --
>
> Key: HBASE-20312
> URL: https://issues.apache.org/jira/browse/HBASE-20312
> Project: HBase
>  Issue Type: New Feature
>  Components: regionserver
>Reporter: Xiang Wang
>Priority: Minor
> Attachments: ccsmap-branch-1.1.patch, jira1.png, jira2.png, jira3.png
>
>
> Now HBase uses ConcurrentSkipListMap as the memstore's data structure.
> Although MemSLAB reduces the memory fragmentation brought by key-value pairs,
> hundreds of millions of key-value pairs still make young-generation
> garbage-collection (GC) pause times long.
>  
> These are 2 GC problems with ConcurrentSkipListMap:
> 1. HBase needs 3 objects on average to store one key-value: one Index (the
> skip list's average node height is 1), one Node, and one KeyValue. Too many
> objects are created for the memstore.
> 2. Recently inserted KeyValues and their map structures (Index, Node) are
> allocated in the young generation. The card table (for the CMS GC algorithm)
> or RSet (for the G1 GC algorithm) changes frequently under high write
> throughput, which makes YGC slow.
>  
> We developed a new skip-list map called CompactedConcurrentSkipListMap (CCSMap
> for short), which provides features similar to ConcurrentSkipListMap but gets
> rid of the objects for every key-value pair.
> CCSMap's memory structure is shown in this picture:
> !jira1.png!
>  
> One CCSMap consists of a certain number of chunks. One chunk consists of a
> certain number of nodes. One node corresponds to one element. All of the
> element's information and its key-value are encoded on a contiguous memory
> segment without any objects.
> Features:
> 1. All insert, update, and delete operations on CCSMap are lock-free.
> 2. Consumes less memory; it brings a 40% memory saving for 50-byte key-values.
> 3. Faster on small key-values because of better cache-line usage: 20~30%
> better read/write throughput than ConcurrentSkipListMap for 50-byte key-values.
> CCSMap does not support recycling space when deleting elements, but that
> doesn't matter for HBase because of region flushes.
> CCSMap has been running on Alibaba's HBase clusters for over 17 months, and it
> cuts down YGC time significantly. Here are 2 graphs of before and after.
> !jira2.png!
> !jira3.png!
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-19064) Synchronous replication for HBase

2017-10-22 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16214595#comment-16214595
 ] 

chunhui shen edited comment on HBASE-19064 at 10/23/17 3:38 AM:


It is a useful feature if we need strong consistency with only two datacenters.
Looking forward to this in 3.0. (y)



was (Author: zjushch):
It is a useful feature if we want strong consistency with only two datacenters.
Looking forward to this in 3.0. (y)


> Synchronous replication for HBase
> -
>
> Key: HBASE-19064
> URL: https://issues.apache.org/jira/browse/HBASE-19064
> Project: HBase
>  Issue Type: New Feature
>  Components: Replication
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 3.0.0
>
>
> The guys from Alibaba gave a presentation at HBaseCon Asia about 
> synchronous replication for HBase. We (Xiaomi) think this is a very useful 
> feature for HBase, so we want to bring it into the community version.
> This is a big feature, so we plan to do it in a feature branch.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19064) Synchronous replication for HBase

2017-10-22 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16214595#comment-16214595
 ] 

chunhui shen commented on HBASE-19064:
--

It is a useful feature if we want strong consistency with only two datacenters.
Looking forward to this in 3.0. (y)


> Synchronous replication for HBase
> -
>
> Key: HBASE-19064
> URL: https://issues.apache.org/jira/browse/HBASE-19064
> Project: HBase
>  Issue Type: New Feature
>  Components: Replication
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 3.0.0
>
>
> The guys from Alibaba gave a presentation at HBaseCon Asia about 
> synchronous replication for HBase. We (Xiaomi) think this is a very useful 
> feature for HBase, so we want to bring it into the community version.
> This is a big feature, so we plan to do it in a feature branch.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18002) Investigate why bucket cache filling up in file mode in an existing file is slower

2017-07-05 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16075868#comment-16075868
 ] 

chunhui shen commented on HBASE-18002:
--

Seems fine,  +1 for the patch

> Investigate why bucket cache filling up in file mode in an existing file is 
> slower
> ---
>
> Key: HBASE-18002
> URL: https://issues.apache.org/jira/browse/HBASE-18002
> Project: HBase
>  Issue Type: Sub-task
>  Components: BucketCache
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: HBASE-18002_1.patch, HBASE-18002_1.patch, 
> HBASE-18002.patch
>
>
> This issue was observed when we recently did some tests with an SSD-based bucket 
> cache. A similar thing was also reported by @stack and [~danielpol] while doing 
> some of this bucket-cache-related testing.
> When we try to preload a bucket cache (in file mode) with a new file, the 
> bucket cache fills up quite fast and there are not many 'failedBlockAdditions'. 
> But when the same bucket cache is filled up with a preexisting file (one that 
> already had some entries filled in), it has more 'failedBlockAdditions' and the 
> cache does not fill up as fast. Investigate why this happens. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-15691) Port HBASE-10205 (ConcurrentModificationException in BucketAllocator) to branch-1

2017-06-20 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16056865#comment-16056865
 ] 

chunhui shen commented on HBASE-15691:
--

{code:java}
+public synchronized void instantiateBucket(Bucket b) {
+private synchronized void removeBucket(Bucket b) {
+public synchronized IndexStatistics statistics() {
{code}
None of these synchronized methods is on the client read path, so there 
should be no perf implications.
I haven't found any possibility of deadlock, because these methods and their 
callees don't try to take any other lock. (It's easy to read through everything 
the methods do now :D)


+1 on the patch


> Port HBASE-10205 (ConcurrentModificationException in BucketAllocator) to 
> branch-1
> -
>
> Key: HBASE-15691
> URL: https://issues.apache.org/jira/browse/HBASE-15691
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 1.3.0
>Reporter: Andrew Purtell
>Assignee: Stephen Yuan Jiang
> Fix For: 1.3.2, 1.4.1, 1.5.0, 1.2.7
>
> Attachments: HBASE-15691-branch-1.patch, HBASE-15691.v2-branch-1.patch
>
>
> HBASE-10205 solves the following problem:
> "
> The BucketCache WriterThread calls BucketCache.freeSpace() upon draining the 
> RAM queue containing entries to be cached. freeSpace() in turn calls 
> BucketSizeInfo.statistics() through BucketAllocator.getIndexStatistics(), 
> which iterates over 'bucketList'. At the same time another WriterThread might 
> call BucketAllocator.allocateBlock(), which may call 
> BucketSizeInfo.allocateBlock(), add a bucket to 'bucketList' and consequently 
> cause a ConcurrentModificationException. Calls to 
> BucketAllocator.allocateBlock() are synchronized, but calls to 
> BucketAllocator.getIndexStatistics() are not, which allows this race to occur.
> "
> However, for some unknown reason, HBASE-10205 was committed only to master 
> (2.0 and beyond) and the 0.98 branch. To preserve continuity we should 
> commit it to branch-1.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-15314) Allow more than one backing file in bucketcache

2017-03-24 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15940586#comment-15940586
 ] 

chunhui shen commented on HBASE-15314:
--

[~zyork] 
Could you help to backport it? The patch should be nearly the same as trunk's.
It's not convenient for me to do it these days.

Thank you very much!

> Allow more than one backing file in bucketcache
> ---
>
> Key: HBASE-15314
> URL: https://issues.apache.org/jira/browse/HBASE-15314
> Project: HBase
>  Issue Type: Sub-task
>  Components: BucketCache
>Reporter: stack
>Assignee: chunhui shen
> Fix For: 2.0
>
> Attachments: FileIOEngine.java, HBASE-15314.master.001.patch, 
> HBASE-15314.master.001.patch, HBASE-15314.patch, HBASE-15314-v2.patch, 
> HBASE-15314-v3.patch, HBASE-15314-v4.patch, HBASE-15314-v5.patch, 
> HBASE-15314-v6.patch, HBASE-15314-v7.patch, HBASE-15314-v8.patch
>
>
> Allow bucketcache use more than just one backing file: e.g. chassis has more 
> than one SSD in it.
> Usage (setting the following configurations in hbase-site.xml):
> {quote}
> <property>
>   <name>hbase.bucketcache.ioengine</name>
>   <value>files:/mnt/disk1/bucketcache,/mnt/disk2/bucketcache,/mnt/disk3/bucketcache,/mnt/disk4/bucketcache</value>
> </property>
> <property>
>   <name>hbase.bucketcache.size</name>
>   <value>1048576</value>
> </property>
> {quote}
> The above setting means the total capacity of the cache is 1048576MB (1TB); 
> each file's length will be set to 0.25TB.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-15314) Allow more than one backing file in bucketcache

2017-03-16 Thread chunhui shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chunhui shen updated HBASE-15314:
-
Description: 
Allow bucketcache use more than just one backing file: e.g. chassis has more 
than one SSD in it.

Usage (setting the following configurations in hbase-site.xml):
{quote}
<property>
  <name>hbase.bucketcache.ioengine</name>
  <value>files:/mnt/disk1/bucketcache,/mnt/disk2/bucketcache,/mnt/disk3/bucketcache,/mnt/disk4/bucketcache</value>
</property>
<property>
  <name>hbase.bucketcache.size</name>
  <value>1048576</value>
</property>
{quote}
The above setting means the total capacity of the cache is 1048576MB (1TB); 
each file's length will be set to 0.25TB.


  was:
Allow bucketcache use more than just one backing file: e.g. chassis has more 
than one SSD in it.

Usage:
{quote}
Setting the following configurations in hbase-site.xml:
<property>
  <name>hbase.bucketcache.ioengine</name>
  <value>files:/mnt/disk1/bucketcache,/mnt/disk2/bucketcache,/mnt/disk3/bucketcache,/mnt/disk4/bucketcache</value>
</property>
<property>
  <name>hbase.bucketcache.size</name>
  <value>1048576</value>
</property>
{quote}
The above setting means the total capacity of the cache is 1048576MB (1TB); 
each file's length will be set to 0.25TB.



> Allow more than one backing file in bucketcache
> ---
>
> Key: HBASE-15314
> URL: https://issues.apache.org/jira/browse/HBASE-15314
> Project: HBase
>  Issue Type: Sub-task
>  Components: BucketCache
>Reporter: stack
>Assignee: chunhui shen
> Fix For: 2.0
>
> Attachments: FileIOEngine.java, HBASE-15314.master.001.patch, 
> HBASE-15314.master.001.patch, HBASE-15314.patch, HBASE-15314-v2.patch, 
> HBASE-15314-v3.patch, HBASE-15314-v4.patch, HBASE-15314-v5.patch, 
> HBASE-15314-v6.patch, HBASE-15314-v7.patch, HBASE-15314-v8.patch
>
>
> Allow bucketcache use more than just one backing file: e.g. chassis has more 
> than one SSD in it.
> Usage (setting the following configurations in hbase-site.xml):
> {quote}
> <property>
>   <name>hbase.bucketcache.ioengine</name>
>   <value>files:/mnt/disk1/bucketcache,/mnt/disk2/bucketcache,/mnt/disk3/bucketcache,/mnt/disk4/bucketcache</value>
> </property>
> <property>
>   <name>hbase.bucketcache.size</name>
>   <value>1048576</value>
> </property>
> {quote}
> The above setting means the total capacity of the cache is 1048576MB (1TB); 
> each file's length will be set to 0.25TB.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-15314) Allow more than one backing file in bucketcache

2017-03-16 Thread chunhui shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chunhui shen updated HBASE-15314:
-
Description: 
Allow bucketcache use more than just one backing file: e.g. chassis has more 
than one SSD in it.

Usage:
{quote}
Setting the following configurations in hbase-site.xml:
<property>
  <name>hbase.bucketcache.ioengine</name>
  <value>files:/mnt/disk1/bucketcache,/mnt/disk2/bucketcache,/mnt/disk3/bucketcache,/mnt/disk4/bucketcache</value>
</property>
<property>
  <name>hbase.bucketcache.size</name>
  <value>1048576</value>
</property>
{quote}
The above setting means the total capacity of the cache is 1048576MB (1TB); 
each file's length will be set to 0.25TB.


  was:Allow bucketcache use more than just one backing file: e.g. chassis has 
more than one SSD in it.


> Allow more than one backing file in bucketcache
> ---
>
> Key: HBASE-15314
> URL: https://issues.apache.org/jira/browse/HBASE-15314
> Project: HBase
>  Issue Type: Sub-task
>  Components: BucketCache
>Reporter: stack
>Assignee: chunhui shen
> Fix For: 2.0
>
> Attachments: FileIOEngine.java, HBASE-15314.master.001.patch, 
> HBASE-15314.master.001.patch, HBASE-15314.patch, HBASE-15314-v2.patch, 
> HBASE-15314-v3.patch, HBASE-15314-v4.patch, HBASE-15314-v5.patch, 
> HBASE-15314-v6.patch, HBASE-15314-v7.patch, HBASE-15314-v8.patch
>
>
> Allow bucketcache use more than just one backing file: e.g. chassis has more 
> than one SSD in it.
> Usage:
> {quote}
> Setting the following configurations in hbase-site.xml:
> <property>
>   <name>hbase.bucketcache.ioengine</name>
>   <value>files:/mnt/disk1/bucketcache,/mnt/disk2/bucketcache,/mnt/disk3/bucketcache,/mnt/disk4/bucketcache</value>
> </property>
> <property>
>   <name>hbase.bucketcache.size</name>
>   <value>1048576</value>
> </property>
> {quote}
> The above setting means the total capacity of the cache is 1048576MB (1TB); 
> each file's length will be set to 0.25TB.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-15314) Allow more than one backing file in bucketcache

2017-03-15 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15927306#comment-15927306
 ] 

chunhui shen commented on HBASE-15314:
--

[~ram_krish]
Please help to commit it,  thanks   :)

> Allow more than one backing file in bucketcache
> ---
>
> Key: HBASE-15314
> URL: https://issues.apache.org/jira/browse/HBASE-15314
> Project: HBase
>  Issue Type: Sub-task
>  Components: BucketCache
>Reporter: stack
>Assignee: Aaron Tokhy
> Attachments: FileIOEngine.java, HBASE-15314.master.001.patch, 
> HBASE-15314.master.001.patch, HBASE-15314.patch, HBASE-15314-v2.patch, 
> HBASE-15314-v3.patch, HBASE-15314-v4.patch, HBASE-15314-v5.patch, 
> HBASE-15314-v6.patch, HBASE-15314-v7.patch, HBASE-15314-v8.patch
>
>
> Allow bucketcache use more than just one backing file: e.g. chassis has more 
> than one SSD in it.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-15314) Allow more than one backing file in bucketcache

2017-03-13 Thread chunhui shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chunhui shen updated HBASE-15314:
-
Attachment: HBASE-15314-v8.patch

Updated the patch per the RB comments.

> Allow more than one backing file in bucketcache
> ---
>
> Key: HBASE-15314
> URL: https://issues.apache.org/jira/browse/HBASE-15314
> Project: HBase
>  Issue Type: Sub-task
>  Components: BucketCache
>Reporter: stack
>Assignee: Aaron Tokhy
> Attachments: FileIOEngine.java, HBASE-15314.master.001.patch, 
> HBASE-15314.master.001.patch, HBASE-15314.patch, HBASE-15314-v2.patch, 
> HBASE-15314-v3.patch, HBASE-15314-v4.patch, HBASE-15314-v5.patch, 
> HBASE-15314-v6.patch, HBASE-15314-v7.patch, HBASE-15314-v8.patch
>
>
> Allow bucketcache use more than just one backing file: e.g. chassis has more 
> than one SSD in it.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17757) Unify blocksize after encoding to decrease memory fragment

2017-03-08 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15902318#comment-15902318
 ] 

chunhui shen commented on HBASE-17757:
--

bq. We have a tool to get a cache summary online
The tool seems useful for this feature; could you make the patch in a new issue?

> Unify blocksize after encoding to decrease memory fragment 
> ---
>
> Key: HBASE-17757
> URL: https://issues.apache.org/jira/browse/HBASE-17757
> Project: HBase
>  Issue Type: New Feature
>Reporter: Allan Yang
>Assignee: Allan Yang
> Attachments: HBASE-17757.patch
>
>
> Usually, we store encoded (uncompressed) blocks in the blockcache/bucketCache. 
> Though we have set the blocksize, after encoding the blocksize varies. Varied 
> blocksizes cause memory-fragmentation problems, which finally result in more 
> FGC. In order to relieve the memory fragmentation, this issue adjusts the 
> encoded blocks to a unified size.
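> A minimal sketch of the unification idea (illustrative only, not the attached 
> patch; the unit parameter is an assumption):
> {code:java}
> // Round an encoded block's size up to the next multiple of a unified size,
> // so the bucket allocator sees one size class instead of varied sizes.
> static int unifiedSize(int encodedSize, int unit) {
>   return ((encodedSize + unit - 1) / unit) * unit;
> }
> // e.g. unifiedSize(46123, 16 * 1024) == 49152
> {code}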



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17739) BucketCache is inefficient/wasteful/dumb in its bucket allocations

2017-03-07 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15900583#comment-15900583
 ] 

chunhui shen commented on HBASE-17739:
--

It's a feature completed by my teammate [~allan163]; he will open a new issue 
to talk about this.
Thanks, sir

> BucketCache is inefficient/wasteful/dumb in its bucket allocations
> --
>
> Key: HBASE-17739
> URL: https://issues.apache.org/jira/browse/HBASE-17739
> Project: HBase
>  Issue Type: Sub-task
>  Components: BucketCache
>Reporter: stack
>
> By default we allocate 14 buckets with sizes from 5K to 513K. If lots of heap 
> is given over to the bucketcache and, say, no allocations are made for a 
> particular bucket size, this means we have a bunch of the bucketcache that 
> just goes idle/unused.
> For example, say heap is 100G. We'll divide it up among the sizes. If say we 
> only ever do 5k records, then most of the cache will go unused while the 
> allocation for 5k objects will see churn.
> Here is an old note of [~anoop.hbase]'s from a conversation on bucket cache 
> we had offlist that describes the issue:
> "By default we have those 14 buckets with size range of 5K to 513K.
>   All sizes will have one bucket (with size 513*4) each except the
> last size.. ie. 513K sized many buckets will be there.  If we keep on
> writing only same sized blocks, we may lose all in-btw sized buckets.
> Say we write only 4K sized blocks. We will 1st fill the bucket in 5K
> size. There is only one such bucket. Once this is filled, we will try
> to grab a complete free bucket from other sizes..  But we can not take
> it from 9K... 385K sized ones as there is only ONE bucket for these
> sizes.  We will take only from 513 size.. There are many in that...
> So we will eventually take all the buckets from 513 except the last
> one.. Ya it has to keep at least one in every size.. So we will
> lose that much size.. They are of no use."
> We should set the size type on the fly as the records come in.
> Or better, we should choose record size on the fly. Here is another comment 
> from [~anoop.hbase]:
> "The second is the biggest contributor.  Suppose instead of 4K
> sized blocks, the user has 2K sized blocks..  When we write a block to a 
> bucket slot, we will reserve size equal to the allocated size for that block.
> So when we write 2K sized blocks (maybe actual size a bit more than
> 2K) we will take 5K with each of the blocks.  So u can see that we are
> losing ~3K with every block. Means we are losing more than half."
> He goes on: "If I am 100% sure that all my tables have 2K HFile block size, I 
> need to give this config a value 3 * 1024 (if I give exactly 2K there may
> again be a problem! That is another story; we need to see how we can give
> more guarantee for the block size restriction, HBASE-15248)..  So here also 
> ~1K lost for every 2K.. So something like a 30% loss!!! :-("
> So, we should figure the record sizes ourselves on the fly.
> Anything less has us wasting loads of cache space, nvm inefficiencies lost 
> because of how we serialize base types to cache.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-15314) Allow more than one backing file in bucketcache

2017-03-07 Thread chunhui shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chunhui shen updated HBASE-15314:
-
Attachment: HBASE-15314-v7.patch

The v7 patch added some annotations per the comments on Review Board.

> Allow more than one backing file in bucketcache
> ---
>
> Key: HBASE-15314
> URL: https://issues.apache.org/jira/browse/HBASE-15314
> Project: HBase
>  Issue Type: Sub-task
>  Components: BucketCache
>Reporter: stack
>Assignee: Aaron Tokhy
> Attachments: FileIOEngine.java, HBASE-15314.master.001.patch, 
> HBASE-15314.master.001.patch, HBASE-15314.patch, HBASE-15314-v2.patch, 
> HBASE-15314-v3.patch, HBASE-15314-v4.patch, HBASE-15314-v5.patch, 
> HBASE-15314-v6.patch, HBASE-15314-v7.patch
>
>
> Allow bucketcache use more than just one backing file: e.g. chassis has more 
> than one SSD in it.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-15314) Allow more than one backing file in bucketcache

2017-03-06 Thread chunhui shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chunhui shen updated HBASE-15314:
-
Attachment: HBASE-15314-v6.patch

Uploaded the latest patch (15314-v6.patch) on review board

> Allow more than one backing file in bucketcache
> ---
>
> Key: HBASE-15314
> URL: https://issues.apache.org/jira/browse/HBASE-15314
> Project: HBase
>  Issue Type: Sub-task
>  Components: BucketCache
>Reporter: stack
>Assignee: Aaron Tokhy
> Attachments: FileIOEngine.java, HBASE-15314.master.001.patch, 
> HBASE-15314.master.001.patch, HBASE-15314.patch, HBASE-15314-v2.patch, 
> HBASE-15314-v3.patch, HBASE-15314-v4.patch, HBASE-15314-v5.patch, 
> HBASE-15314-v6.patch
>
>
> Allow bucketcache use more than just one backing file: e.g. chassis has more 
> than one SSD in it.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17739) BucketCache is inefficient/wasteful/dumb in its bucket allocations

2017-03-06 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15898603#comment-15898603
 ] 

chunhui shen commented on HBASE-17739:
--

How about making the encoded blocks have similar sizes?
In fact, we did this to increase the usable ratio of bucket cache space. 
If anyone is interested or agrees, I could open a new issue to upload the patch.

> BucketCache is inefficient/wasteful/dumb in its bucket allocations
> --
>
> Key: HBASE-17739
> URL: https://issues.apache.org/jira/browse/HBASE-17739
> Project: HBase
>  Issue Type: Sub-task
>  Components: BucketCache
>Reporter: stack
>
> By default we allocate 14 buckets with sizes from 5K to 513K. If lots of heap 
> is given over to the bucketcache and, say, no allocations are made for a 
> particular bucket size, this means we have a bunch of the bucketcache that 
> just goes idle/unused.
> For example, say heap is 100G. We'll divide it up among the sizes. If say we 
> only ever do 5k records, then most of the cache will go unused while the 
> allocation for 5k objects will see churn.
> Here is an old note of [~anoop.hbase]'s from a conversation on bucket cache 
> we had offlist that describes the issue:
> "By default we have those 14 buckets with size range of 5K to 513K.
>   All sizes will have one bucket (with size 513*4) each except the
> last size.. ie. 513K sized many buckets will be there.  If we keep on
> writing only same sized blocks, we may lose all in-btw sized buckets.
> Say we write only 4K sized blocks. We will 1st fill the bucket in 5K
> size. There is only one such bucket. Once this is filled, we will try
> to grab a complete free bucket from other sizes..  But we can not take
> it from 9K... 385K sized ones as there is only ONE bucket for these
> sizes.  We will take only from 513 size.. There are many in that...
> So we will eventually take all the buckets from 513 except the last
> one.. Ya it has to keep at least one in every size.. So we will
> lose that much size.. They are of no use."
> We should set the size type on the fly as the records come in.
> Or better, we should choose record size on the fly. Here is another comment 
> from [~anoop.hbase]:
> "The second is the biggest contributor.  Suppose instead of 4K
> sized blocks, the user has 2K sized blocks..  When we write a block to a 
> bucket slot, we will reserve size equal to the allocated size for that block.
> So when we write 2K sized blocks (maybe actual size a bit more than
> 2K) we will take 5K with each of the blocks.  So u can see that we are
> losing ~3K with every block. Means we are losing more than half."
> He goes on: "If I am 100% sure that all my tables have 2K HFile block size, I 
> need to give this config a value 3 * 1024 (if I give exactly 2K there may
> again be a problem! That is another story; we need to see how we can give
> more guarantee for the block size restriction, HBASE-15248)..  So here also 
> ~1K lost for every 2K.. So something like a 30% loss!!! :-("
> So, we should figure the record sizes ourselves on the fly.
> Anything less has us wasting loads of cache space, nvm inefficiencies lost 
> because of how we serialize base types to cache.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-15314) Allow more than one backing file in bucketcache

2017-02-25 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15884497#comment-15884497
 ] 

chunhui shen commented on HBASE-15314:
--

{quote}
Shouldn't sizePerFile be adjusted since total space is lower than sizePerFile ?
When total space is lower than sizePerFile, the above would raise exception, 
right ?
{quote}
Yes, if the space is less than sizePerFile, FileIOEngine will throw an 
exception when initializing; it's unnecessary to adjust sizePerFile.

{quote}
For getFileNum(long offset), RuntimeException may be thrown. Should it declare 
to throw IOE ?
{quote}
It isn't expected to be thrown, so a RuntimeException seems OK.


{quote}
+   * Get the absolute offset in given file with the relative global offset.
What does "relative global" mean ?
{quote}
FileIOEngine provides logically contiguous storage over several physical 
files; the global offset refers to that logical storage.
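To make that concrete, a sketch of the mapping (assuming equally sized files; 
an illustration only, not the patch code):
{code:java}
// Map a global (logical) offset over N equally sized files to
// (file index, absolute offset inside that file).
static int getFileNum(long sizePerFile, long globalOffset) {
  return (int) (globalOffset / sizePerFile);
}

static long getAbsoluteOffsetInFile(long sizePerFile, long globalOffset) {
  return globalOffset % sizePerFile;
}
// e.g. with four 0.25TB files, global offset 300GB -> file 1, offset 44GB.
{code}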

Thanks Ted; the other comments are applied in patch v5.

Moved to RB https://reviews.apache.org/r/57068/ for more review.


> Allow more than one backing file in bucketcache
> ---
>
> Key: HBASE-15314
> URL: https://issues.apache.org/jira/browse/HBASE-15314
> Project: HBase
>  Issue Type: Sub-task
>  Components: BucketCache
>Reporter: stack
>Assignee: Aaron Tokhy
> Attachments: FileIOEngine.java, HBASE-15314.master.001.patch, 
> HBASE-15314.master.001.patch, HBASE-15314.patch, HBASE-15314-v2.patch, 
> HBASE-15314-v3.patch, HBASE-15314-v4.patch, HBASE-15314-v5.patch
>
>
> Allow bucketcache use more than just one backing file: e.g. chassis has more 
> than one SSD in it.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-15314) Allow more than one backing file in bucketcache

2017-02-25 Thread chunhui shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chunhui shen updated HBASE-15314:
-
Attachment: HBASE-15314-v5.patch

> Allow more than one backing file in bucketcache
> ---
>
> Key: HBASE-15314
> URL: https://issues.apache.org/jira/browse/HBASE-15314
> Project: HBase
>  Issue Type: Sub-task
>  Components: BucketCache
>Reporter: stack
>Assignee: Aaron Tokhy
> Attachments: FileIOEngine.java, HBASE-15314.master.001.patch, 
> HBASE-15314.master.001.patch, HBASE-15314.patch, HBASE-15314-v2.patch, 
> HBASE-15314-v3.patch, HBASE-15314-v4.patch, HBASE-15314-v5.patch
>
>
> Allow bucketcache use more than just one backing file: e.g. chassis has more 
> than one SSD in it.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (HBASE-15314) Allow more than one backing file in bucketcache

2017-02-25 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15884201#comment-15884201
 ] 

chunhui shen edited comment on HBASE-15314 at 2/25/17 11:31 AM:


[~ram_krish]
With the uploaded v4 patch, we can configure a single file or multiple files 
as follows:
{quote}
<property>
  <name>hbase.bucketcache.ioengine</name>
  <value>file:/mnt/disk1/bucketcache</value>
</property>
<property>
  <name>hbase.bucketcache.ioengine</name>
  <value>file:/mnt/disk1/bucketcache,/mnt/disk2/bucketcache,/mnt/disk3/bucketcache</value>
</property>
{quote}

Could you take a look?  Thanks.


was (Author: zjushch):
[~ram_krish]
With the uploaded v4 patch, we can configure a single file or multiple files 
as follows:
{quote}
<property>
  <name>hbase.bucketcache.ioengine</name>
  <value>file:/mnt/disk1/bucketcache</value>
</property>
<property>
  <name>hbase.bucketcache.ioengine</name>
  <value>file:/mnt/disk1/bucketcache,/mnt/disk2/bucketcache,/mnt/disk3/bucketcache</value>
</property>
{quote}

Thanks.

> Allow more than one backing file in bucketcache
> ---
>
> Key: HBASE-15314
> URL: https://issues.apache.org/jira/browse/HBASE-15314
> Project: HBase
>  Issue Type: Sub-task
>  Components: BucketCache
>Reporter: stack
>Assignee: Aaron Tokhy
> Attachments: FileIOEngine.java, HBASE-15314.master.001.patch, 
> HBASE-15314.master.001.patch, HBASE-15314.patch, HBASE-15314-v2.patch, 
> HBASE-15314-v3.patch, HBASE-15314-v4.patch
>
>
> Allow bucketcache use more than just one backing file: e.g. chassis has more 
> than one SSD in it.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-15314) Allow more than one backing file in bucketcache

2017-02-25 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15884201#comment-15884201
 ] 

chunhui shen commented on HBASE-15314:
--

[~ram_krish]
With the uploaded v4 patch, we can configure a single file or multiple files 
as follows:
{quote}
<property>
  <name>hbase.bucketcache.ioengine</name>
  <value>file:/mnt/disk1/bucketcache</value>
</property>
<property>
  <name>hbase.bucketcache.ioengine</name>
  <value>file:/mnt/disk1/bucketcache,/mnt/disk2/bucketcache,/mnt/disk3/bucketcache</value>
</property>
{quote}

Thanks.

> Allow more than one backing file in bucketcache
> ---
>
> Key: HBASE-15314
> URL: https://issues.apache.org/jira/browse/HBASE-15314
> Project: HBase
>  Issue Type: Sub-task
>  Components: BucketCache
>Reporter: stack
>Assignee: Aaron Tokhy
> Attachments: FileIOEngine.java, HBASE-15314.master.001.patch, 
> HBASE-15314.master.001.patch, HBASE-15314.patch, HBASE-15314-v2.patch, 
> HBASE-15314-v3.patch, HBASE-15314-v4.patch
>
>
> Allow bucketcache use more than just one backing file: e.g. chassis has more 
> than one SSD in it.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-15314) Allow more than one backing file in bucketcache

2017-02-25 Thread chunhui shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chunhui shen updated HBASE-15314:
-
Attachment: HBASE-15314-v4.patch

> Allow more than one backing file in bucketcache
> ---
>
> Key: HBASE-15314
> URL: https://issues.apache.org/jira/browse/HBASE-15314
> Project: HBase
>  Issue Type: Sub-task
>  Components: BucketCache
>Reporter: stack
>Assignee: Aaron Tokhy
> Attachments: FileIOEngine.java, HBASE-15314.master.001.patch, 
> HBASE-15314.master.001.patch, HBASE-15314.patch, HBASE-15314-v2.patch, 
> HBASE-15314-v3.patch, HBASE-15314-v4.patch
>
>
> Allow bucketcache use more than just one backing file: e.g. chassis has more 
> than one SSD in it.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-15314) Allow more than one backing file in bucketcache

2017-02-23 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15881836#comment-15881836
 ] 

chunhui shen commented on HBASE-15314:
--

bq. How would you handle the case of striping
Read the block from two files if it crosses a boundary, just like the handling 
in ByteBufferIOEngine, which may read a block from multiple ByteBuffers.
This logic is implemented by FileIOEngine#accessFile in the attachment 
'FileIOEngine.java' (see the sketch below). 
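Roughly, the splitting looks like this (an illustration only; preallocated 
fixed-size files are assumed):
{code:java}
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

public class CrossFileRead {
  // Fill dst starting at a global offset, splitting the read whenever the
  // requested range crosses the boundary between two backing files.
  static void read(FileChannel[] files, long sizePerFile, long globalOffset,
      ByteBuffer dst) throws IOException {
    int fileNum = (int) (globalOffset / sizePerFile);
    long offsetInFile = globalOffset % sizePerFile;
    while (dst.hasRemaining()) {
      int n = files[fileNum].read(dst, offsetInFile);
      if (n < 0) {
        throw new IOException("Unexpected EOF in backing file " + fileNum);
      }
      offsetInFile += n;
      if (offsetInFile >= sizePerFile) { // crossed into the next file
        fileNum++;
        offsetInFile = 0;
      }
    }
  }
}
{code}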

Thanks

> Allow more than one backing file in bucketcache
> ---
>
> Key: HBASE-15314
> URL: https://issues.apache.org/jira/browse/HBASE-15314
> Project: HBase
>  Issue Type: Sub-task
>  Components: BucketCache
>Reporter: stack
>Assignee: Aaron Tokhy
> Attachments: FileIOEngine.java, HBASE-15314.master.001.patch, 
> HBASE-15314.master.001.patch, HBASE-15314.patch, HBASE-15314-v2.patch, 
> HBASE-15314-v3.patch
>
>
> Allow bucketcache use more than just one backing file: e.g. chassis has more 
> than one SSD in it.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-15314) Allow more than one backing file in bucketcache

2017-02-02 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15850958#comment-15850958
 ] 

chunhui shen commented on HBASE-15314:
--

Sorry, I still don't see the benefit of guaranteeing that blocks reside in a 
single segment.

With either of the two approaches, I think the BucketCache can still continue 
operating on a device failure.

In fact, the attachment 'FileIOEngine.java' also includes the logic for 
tolerating device failure.

bq. This does not happen in the patch I submitted, but it could be an 
improvement that you wouldn't be able to easily handle if HBase blocks crossed 
segments.
Thus, I don't agree that it wouldn't be easy to handle HBase blocks that 
cross segments.

Lastly, I still think the BucketCache shouldn't be aware of multiple segments; 
the FileIOEngine should provide contiguous storage over several physical 
devices, just like RAID. 



> Allow more than one backing file in bucketcache
> ---
>
> Key: HBASE-15314
> URL: https://issues.apache.org/jira/browse/HBASE-15314
> Project: HBase
>  Issue Type: Sub-task
>  Components: BucketCache
>Reporter: stack
>Assignee: Aaron Tokhy
> Attachments: FileIOEngine.java, HBASE-15314.master.001.patch, 
> HBASE-15314.master.001.patch, HBASE-15314.patch, HBASE-15314-v2.patch, 
> HBASE-15314-v3.patch
>
>
> Allow bucketcache use more than just one backing file: e.g. chassis has more 
> than one SSD in it.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-15314) Allow more than one backing file in bucketcache

2017-01-24 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15837098#comment-15837098
 ] 

chunhui shen commented on HBASE-15314:
--

IMO, keep things simple if there is no obvious benefit; I prefer the approach 
of the 1st patch.

Thanks.

> Allow more than one backing file in bucketcache
> ---
>
> Key: HBASE-15314
> URL: https://issues.apache.org/jira/browse/HBASE-15314
> Project: HBase
>  Issue Type: Sub-task
>  Components: BucketCache
>Reporter: stack
>Assignee: Aaron Tokhy
> Attachments: FileIOEngine.java, HBASE-15314.master.001.patch, 
> HBASE-15314.master.001.patch, HBASE-15314.patch, HBASE-15314-v2.patch, 
> HBASE-15314-v3.patch
>
>
> Allow bucketcache use more than just one backing file: e.g. chassis has more 
> than one SSD in it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15314) Allow more than one backing file in bucketcache

2017-01-24 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15837087#comment-15837087
 ] 

chunhui shen commented on HBASE-15314:
--

bq.but at least we can guarantee that every allocation will reside in its own 
file
I have also considered this scenario, but it seems not meaningful:
1. It will take an extra IO if an allocation crosses files, but the probability 
is about 32KB/320GB ≈ 1/10M (for example, with a 32KB block size and a 320GB 
file capacity). So, I think it has no effect on performance.
2. With cross-file allocation, no extra logic is needed if a single file fails 
due to an IO error. We will free the whole allocation if it fails (see this 
logic in BucketCache#writeToCache).

In addition, it seems to introduce an odd notion which is not easily 
understandable for users; from the release notes:
easily for user , from the release notes:
{quote}
The first block is wasted (it is marked allocated). The worst case is 1 
'largest block' per file. If an allocation fails for any reason, all/any 
allocated blocks (including wasted ones) are freed again for subsequent 
allocation requests. This is very similar to a 'JBOD' configuration (there is 
no striping of any kind).
{quote}


> Allow more than one backing file in bucketcache
> ---
>
> Key: HBASE-15314
> URL: https://issues.apache.org/jira/browse/HBASE-15314
> Project: HBase
>  Issue Type: Sub-task
>  Components: BucketCache
>Reporter: stack
>Assignee: Aaron Tokhy
> Attachments: FileIOEngine.java, HBASE-15314.master.001.patch, 
> HBASE-15314.master.001.patch, HBASE-15314.patch, HBASE-15314-v2.patch, 
> HBASE-15314-v3.patch
>
>
> Allow bucketcache use more than just one backing file: e.g. chassis has more 
> than one SSD in it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16630) Fragmentation in long running Bucket Cache

2017-01-17 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15827327#comment-15827327
 ] 

chunhui shen commented on HBASE-16630:
--

[~dvdreddy]
The freeEntireBuckets method should return the freed bytes, which should then 
be added to the variable 'bytesFreed' in BucketCache#freeSpace.

I agree that the newly introduced method BucketCache#freeEntireBuckets will 
help to free space in the fragmentation scenario and has no effect in the 
no-fragmentation scenario. So the patch is fine by me, thanks.

> Fragmentation in long running Bucket Cache
> --
>
> Key: HBASE-16630
> URL: https://issues.apache.org/jira/browse/HBASE-16630
> Project: HBase
>  Issue Type: Bug
>  Components: BucketCache
>Affects Versions: 2.0.0, 1.1.6, 1.3.1, 1.2.3
>Reporter: deepankar
>Assignee: deepankar
>Priority: Critical
> Attachments: 16630-v2-suggest.patch, 16630-v3-suggest.patch, 
> HBASE-16630.patch, HBASE-16630-v2.patch, HBASE-16630-v3.patch
>
>
> As we are running bucket cache for a long time in our system, we are 
> observing cases where some nodes after some time does not fully utilize the 
> bucket cache, in some cases it is even worse in the sense they get stuck at a 
> value < 0.25 % of the bucket cache (DEFAULT_MEMORY_FACTOR as all our tables 
> are configured in-memory for simplicity sake).
> We took a heap dump and analyzed what is happening and saw that is classic 
> case of fragmentation, current implementation of BucketCache (mainly 
> BucketAllocator) relies on the logic that fullyFreeBuckets are available for 
> switching/adjusting cache usage between different bucketSizes . But once a 
> compaction / bulkload happens and the blocks are evicted from a bucket size , 
> these are usually evicted from random places of the buckets of a bucketSize 
> and thus locking the number of buckets associated with a bucketSize and in 
> the worst case of the fragmentation we have seen some bucketSizes with 
> occupancy ratio of <  10 % But they dont have any completelyFreeBuckets to 
> share with the other bucketSize. 
> Currently the existing eviction logic helps in the cases where cache used is 
> more the MEMORY_FACTOR or MULTI_FACTOR and once those evictions are also 
> done, the eviction (freeSpace function) will not evict anything and the cache 
> utilization will be stuck at that value without any allocations for other 
> required sizes.
> The fix for this we came up with is simple that we do deFragmentation ( 
> compaction) of the bucketSize and thus increasing the occupancy ratio and 
> also freeing up the buckets to be fullyFree, this logic itself is not 
> complicated as the bucketAllocator takes care of packing the blocks in the 
> buckets, we need evict and re-allocate the blocks for all the BucketSizes 
> that dont fit the criteria.
> I am attaching an initial patch just to give an idea of what we are thinking 
> and I'll improve it based on the comments from the community.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16630) Fragmentation in long running Bucket Cache

2017-01-17 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15827291#comment-15827291
 ] 

chunhui shen commented on HBASE-16630:
--

bq. If its harmless, why commit it (smile)?
I mean it's harmless for the no-fragmentation scenario after applying this patch.  :)

> Fragmentation in long running Bucket Cache
> --
>
> Key: HBASE-16630
> URL: https://issues.apache.org/jira/browse/HBASE-16630
> Project: HBase
>  Issue Type: Bug
>  Components: BucketCache
>Affects Versions: 2.0.0, 1.1.6, 1.3.1, 1.2.3
>Reporter: deepankar
>Assignee: deepankar
>Priority: Critical
> Attachments: 16630-v2-suggest.patch, 16630-v3-suggest.patch, 
> HBASE-16630.patch, HBASE-16630-v2.patch, HBASE-16630-v3.patch
>
>
> As we are running bucket cache for a long time in our system, we are 
> observing cases where some nodes after some time does not fully utilize the 
> bucket cache, in some cases it is even worse in the sense they get stuck at a 
> value < 0.25 % of the bucket cache (DEFAULT_MEMORY_FACTOR as all our tables 
> are configured in-memory for simplicity sake).
> We took a heap dump and analyzed what is happening and saw that is classic 
> case of fragmentation, current implementation of BucketCache (mainly 
> BucketAllocator) relies on the logic that fullyFreeBuckets are available for 
> switching/adjusting cache usage between different bucketSizes . But once a 
> compaction / bulkload happens and the blocks are evicted from a bucket size , 
> these are usually evicted from random places of the buckets of a bucketSize 
> and thus locking the number of buckets associated with a bucketSize and in 
> the worst case of the fragmentation we have seen some bucketSizes with 
> occupancy ratio of <  10 % But they dont have any completelyFreeBuckets to 
> share with the other bucketSize. 
> Currently the existing eviction logic helps in the cases where cache used is 
> more the MEMORY_FACTOR or MULTI_FACTOR and once those evictions are also 
> done, the eviction (freeSpace function) will not evict anything and the cache 
> utilization will be stuck at that value without any allocations for other 
> required sizes.
> The fix for this we came up with is simple that we do deFragmentation ( 
> compaction) of the bucketSize and thus increasing the occupancy ratio and 
> also freeing up the buckets to be fullyFree, this logic itself is not 
> complicated as the bucketAllocator takes care of packing the blocks in the 
> buckets, we need evict and re-allocate the blocks for all the BucketSizes 
> that dont fit the criteria.
> I am attaching an initial patch just to give an idea of what we are thinking 
> and I'll improve it based on the comments from the community.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16630) Fragmentation in long running Bucket Cache

2017-01-16 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15825595#comment-15825595
 ] 

chunhui shen commented on HBASE-16630:
--

[~stack]
Our change for this, which is mentioned in the previous comment, has been 
included in the latest patch.
The patch is fine and should be harmless.

Thanks, sir.

> Fragmentation in long running Bucket Cache
> --
>
> Key: HBASE-16630
> URL: https://issues.apache.org/jira/browse/HBASE-16630
> Project: HBase
>  Issue Type: Bug
>  Components: BucketCache
>Affects Versions: 2.0.0, 1.1.6, 1.3.1, 1.2.3
>Reporter: deepankar
>Assignee: deepankar
>Priority: Critical
> Attachments: 16630-v2-suggest.patch, 16630-v3-suggest.patch, 
> HBASE-16630.patch, HBASE-16630-v2.patch, HBASE-16630-v3.patch
>
>
> As we are running bucket cache for a long time in our system, we are 
> observing cases where some nodes after some time does not fully utilize the 
> bucket cache, in some cases it is even worse in the sense they get stuck at a 
> value < 0.25 % of the bucket cache (DEFAULT_MEMORY_FACTOR as all our tables 
> are configured in-memory for simplicity sake).
> We took a heap dump and analyzed what is happening and saw that is classic 
> case of fragmentation, current implementation of BucketCache (mainly 
> BucketAllocator) relies on the logic that fullyFreeBuckets are available for 
> switching/adjusting cache usage between different bucketSizes . But once a 
> compaction / bulkload happens and the blocks are evicted from a bucket size , 
> these are usually evicted from random places of the buckets of a bucketSize 
> and thus locking the number of buckets associated with a bucketSize and in 
> the worst case of the fragmentation we have seen some bucketSizes with 
> occupancy ratio of <  10 % But they dont have any completelyFreeBuckets to 
> share with the other bucketSize. 
> Currently the existing eviction logic helps in the cases where cache used is 
> more the MEMORY_FACTOR or MULTI_FACTOR and once those evictions are also 
> done, the eviction (freeSpace function) will not evict anything and the cache 
> utilization will be stuck at that value without any allocations for other 
> required sizes.
> The fix for this we came up with is simple that we do deFragmentation ( 
> compaction) of the bucketSize and thus increasing the occupancy ratio and 
> also freeing up the buckets to be fullyFree, this logic itself is not 
> complicated as the bucketAllocator takes care of packing the blocks in the 
> buckets, we need evict and re-allocate the blocks for all the BucketSizes 
> that dont fit the criteria.
> I am attaching an initial patch just to give an idea of what we are thinking 
> and I'll improve it based on the comments from the community.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15314) Allow more than one backing file in bucketcache

2017-01-15 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15823572#comment-15823572
 ] 

chunhui shen commented on HBASE-15314:
--

bq. So this is similar to what the 1st patches in this jira doing

Yes, but the 1st patch doesn't handle the case where reads/writes cross 
multiple files.
The FileIOEngine#accessFile function could be copied into the patch from the 
attachment 'FileIOEngine.java'.

> Allow more than one backing file in bucketcache
> ---
>
> Key: HBASE-15314
> URL: https://issues.apache.org/jira/browse/HBASE-15314
> Project: HBase
>  Issue Type: Sub-task
>  Components: BucketCache
>Reporter: stack
>Assignee: Aaron Tokhy
> Attachments: FileIOEngine.java, HBASE-15314.master.001.patch, 
> HBASE-15314.master.001.patch, HBASE-15314.patch, HBASE-15314-v2.patch, 
> HBASE-15314-v3.patch
>
>
> Allow bucketcache use more than just one backing file: e.g. chassis has more 
> than one SSD in it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15314) Allow more than one backing file in bucketcache

2017-01-15 Thread chunhui shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chunhui shen updated HBASE-15314:
-
Attachment: FileIOEngine.java

> Allow more than one backing file in bucketcache
> ---
>
> Key: HBASE-15314
> URL: https://issues.apache.org/jira/browse/HBASE-15314
> Project: HBase
>  Issue Type: Sub-task
>  Components: BucketCache
>Reporter: stack
>Assignee: Aaron Tokhy
> Attachments: FileIOEngine.java, HBASE-15314.master.001.patch, 
> HBASE-15314.master.001.patch, HBASE-15314.patch, HBASE-15314-v2.patch, 
> HBASE-15314-v3.patch
>
>
> Allow bucketcache use more than just one backing file: e.g. chassis has more 
> than one SSD in it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15314) Allow more than one backing file in bucketcache

2017-01-15 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15823401#comment-15823401
 ] 

chunhui shen commented on HBASE-15314:
--

Sorry for the late response,
but I think adding the notion of 'segmentation' is not a good choice; it will 
make things complex, and block allocation becomes difficult for newcomers to 
understand.

Each IOEngine should provide logically contiguous storage even if it uses 
several physical IO devices, so the design and implementation become simple.

Just my thought, FYI.
Thanks

PS:
The uploaded FileIOEngine.java supports multi-file storage and is used in our 
internal branch. 
Fewer than 50 lines are needed to add this feature.

> Allow more than one backing file in bucketcache
> ---
>
> Key: HBASE-15314
> URL: https://issues.apache.org/jira/browse/HBASE-15314
> Project: HBase
>  Issue Type: Sub-task
>  Components: BucketCache
>Reporter: stack
>Assignee: Aaron Tokhy
> Attachments: HBASE-15314.master.001.patch, 
> HBASE-15314.master.001.patch, HBASE-15314.patch, HBASE-15314-v2.patch, 
> HBASE-15314-v3.patch
>
>
> Allow bucketcache use more than just one backing file: e.g. chassis has more 
> than one SSD in it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16630) Fragmentation in long running Bucket Cache

2017-01-15 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15823377#comment-15823377
 ] 

chunhui shen commented on HBASE-16630:
--

The latest patch seems fine, +1

> Fragmentation in long running Bucket Cache
> --
>
> Key: HBASE-16630
> URL: https://issues.apache.org/jira/browse/HBASE-16630
> Project: HBase
>  Issue Type: Bug
>  Components: BucketCache
>Affects Versions: 2.0.0, 1.1.6, 1.3.1, 1.2.3
>Reporter: deepankar
>Assignee: deepankar
>Priority: Critical
> Attachments: 16630-v2-suggest.patch, 16630-v3-suggest.patch, 
> HBASE-16630.patch, HBASE-16630-v2.patch, HBASE-16630-v3.patch
>
>
> As we have been running the bucket cache for a long time in our system, we 
> are observing cases where some nodes, after some time, do not fully utilize 
> the bucket cache; in some cases it is even worse, in the sense that they get 
> stuck at a value < 0.25% of the bucket cache (DEFAULT_MEMORY_FACTOR, as all 
> our tables are configured in-memory for simplicity's sake).
> We took a heap dump, analyzed what is happening, and saw a classic case of 
> fragmentation: the current implementation of BucketCache (mainly 
> BucketAllocator) relies on the logic that fullyFreeBuckets are available for 
> switching/adjusting cache usage between different bucketSizes. But once a 
> compaction / bulkload happens and blocks are evicted from a bucket size, they 
> are usually evicted from random places in the buckets of that bucketSize, 
> locking up the number of buckets associated with that bucketSize; in the 
> worst cases of fragmentation we have seen some bucketSizes with an occupancy 
> ratio of < 10%, yet without any completelyFreeBuckets to share with the other 
> bucketSizes.
> Currently the existing eviction logic helps in the cases where cache usage is 
> more than the MEMORY_FACTOR or MULTI_FACTOR, but once those evictions are 
> done, the eviction (freeSpace function) will not evict anything more, and 
> cache utilization will be stuck at that value without any allocations for 
> other required sizes.
> The fix we came up with is simple: defragment (compact) the bucketSize, which 
> increases the occupancy ratio and also frees up buckets to be fullyFree. This 
> logic itself is not complicated, as the bucketAllocator takes care of packing 
> the blocks into the buckets; we need to evict and re-allocate the blocks for 
> all the bucketSizes that don't fit the criteria.
> I am attaching an initial patch just to give an idea of what we are thinking, 
> and I'll improve it based on the comments from the community.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16993) BucketCache throw java.io.IOException: Invalid HFile block magic when DATA_BLOCK_ENCODING set to DIFF

2016-11-09 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15650444#comment-15650444
 ] 

chunhui shen commented on HBASE-16993:
--

+1 for the patch; we do the same in our internal branch.

PS:
In the original BucketCache code, 48 bits are used to support 256TB of cache 
space (saving 3 bytes per BucketEntry compared with using a long).
The 48 bits consist of [BucketEntry#offsetBase] (4 bytes) + 
[BucketEntry#offset1] (1 byte) + 8 reserved zero bits (1 byte); 
the 8 reserved zero bits mean the bucket size must be a multiple of 256. 
From the issue description, one of the bucket sizes is configured to 46000, 
which triggers the exception.

Anyway, it seems an unreadable and confusing design.
Thanks [~liubangchen]
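PS2: to illustrate the encoding, here is a sketch based on the description 
above (not BucketEntry's exact field layout): only 40 bits are stored, because 
the low 8 bits of every offset are assumed to be zero.
{code}
// Pack a 48-bit cache offset into 5 stored bytes; requires offset % 256 == 0.
static long encode(long offset) {
  assert (offset & 0xFF) == 0 : "offsets must be multiples of 256";
  return offset >>> 8;                     // 40 significant bits remain
}

static int offsetBase(long encoded) {      // the 4 bytes stored as an int
  return (int) (encoded & 0xFFFFFFFFL);
}

static byte offset1(long encoded) {        // the single extra byte
  return (byte) (encoded >>> 32);
}

static long decode(int offsetBase, byte offset1) {
  long encoded = ((offset1 & 0xFFL) << 32) | (offsetBase & 0xFFFFFFFFL);
  return encoded << 8;                     // restore the 8 implied zero bits
}
{code}
With a bucket size of 46000 (not a multiple of 256), block offsets inside such 
a bucket have non-zero low bits, which this scheme silently drops, so the read 
comes back at the wrong position and hits the invalid block magic.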


> BucketCache throw java.io.IOException: Invalid HFile block magic when 
> DATA_BLOCK_ENCODING set to DIFF
> -
>
> Key: HBASE-16993
> URL: https://issues.apache.org/jira/browse/HBASE-16993
> Project: HBase
>  Issue Type: Bug
>  Components: io
>Affects Versions: 1.1.3
> Environment: hbase version 1.1.3
>Reporter: liubangchen
>Assignee: liubangchen
> Attachments: HBASE-16993.000.patch, HBASE-16993.001.patch
>
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> hbase-site.xml settings:
> hbase.bucketcache.bucket.sizes = 16384,32768,40960,46000,49152,51200,65536,131072,524288
> hbase.bucketcache.size = 16384
> hbase.bucketcache.ioengine = offheap
> hfile.block.cache.size = 0.3
> hfile.block.bloom.cacheonwrite = true
> hbase.rs.cacheblocksonwrite = true
> hfile.block.index.cacheonwrite = true
> n_splits = 200
> create 'usertable',{NAME =>'family', COMPRESSION => 'snappy', VERSIONS => 
> 1,DATA_BLOCK_ENCODING => 'DIFF',CONFIGURATION => 
> {'hbase.hregion.memstore.block.multiplier' => 5}},{DURABILITY => 
> 'SKIP_WAL'},{SPLITS => (1..n_splits).map {|i| 
> "user#{1000+i*(-1000)/n_splits}"}}
> load data
> bin/ycsb load hbase10 -P workloads/workloada -p table=usertable -p 
> columnfamily=family -p fieldcount=10 -p fieldlength=100 -p 
> recordcount=2 -p insertorder=hashed -p insertstart=0 -p 
> clientbuffering=true -p durability=SKIP_WAL -threads 20 -s 
> run 
> bin/ycsb run hbase10 -P workloads/workloadb -p table=usertable -p 
> columnfamily=family -p fieldcount=10 -p fieldlength=100 -p 
> operationcount=2000 -p readallfields=true -p clientbuffering=true -p 
> requestdistribution=zipfian  -threads 10 -s
> log info
> 2016-11-02 20:20:20,261 ERROR 
> [RW.default.readRpcServer.handler=36,queue=21,port=6020] bucket.BucketCache: 
> Failed reading block fdcc7ed6f3b2498b9ef316cc8206c233_44819759 from bucket 
> cache
> java.io.IOException: Invalid HFile block magic: 
> \x00\x00\x00\x00\x00\x00\x00\x00
> at 
> org.apache.hadoop.hbase.io.hfile.BlockType.parse(BlockType.java:154)
> at org.apache.hadoop.hbase.io.hfile.BlockType.read(BlockType.java:167)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock.(HFileBlock.java:273)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$1.deserialize(HFileBlock.java:134)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$1.deserialize(HFileBlock.java:121)
> at 
> org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.getBlock(BucketCache.java:427)
> at 
> org.apache.hadoop.hbase.io.hfile.CombinedBlockCache.getBlock(CombinedBlockCache.java:85)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.getCachedBlock(HFileReaderV2.java:266)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:403)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:269)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:634)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:584)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:247)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:156)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:363)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.(StoreScanner.java:217)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:2071)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.(HRegion.java:5369)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HR

[jira] [Comment Edited] (HBASE-16630) Fragmentation in long running Bucket Cache

2016-09-19 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15505542#comment-15505542
 ] 

chunhui shen edited comment on HBASE-16630 at 9/20/16 4:34 AM:
---

1. Fix the issue that freeSpace doesn't free anything (required).
2. Defragmentation (optional):
 a. Would increase the cache usage ratio quickly; without defragmentation it 
would take a relatively long time. 
 b. Resource overhead (byte copying). The rate should be limited, for example 
triggering only once per hour.
 c. Add a unit test to ensure correctness with defragmentation.


Just my thought.


was (Author: zjushch):
1. Fix the issue that freeSpace doesn't free anything (required).
2. Defragmentation (optional):
 a. Would increase the cache usage ratio quickly; without defragmentation it 
would take a relatively long time. 
 b. Resource overhead (byte copying). The rate should be limited, for example 
triggering only once per hour.
 c. Add a unit test to ensure correctness with defragmentation.


Just my thought.

> Fragmentation in long running Bucket Cache
> --
>
> Key: HBASE-16630
> URL: https://issues.apache.org/jira/browse/HBASE-16630
> Project: HBase
>  Issue Type: Bug
>  Components: BucketCache
>Affects Versions: 2.0.0, 1.1.6, 1.3.1, 1.2.3
>Reporter: deepankar
>Assignee: deepankar
> Attachments: HBASE-16630.patch
>
>
> As we have been running the bucket cache for a long time in our system, we 
> are observing cases where some nodes, after some time, do not fully utilize 
> the bucket cache; in some cases it is even worse, in the sense that they get 
> stuck at a value < 0.25% of the bucket cache (DEFAULT_MEMORY_FACTOR, as all 
> our tables are configured in-memory for simplicity's sake).
> We took a heap dump, analyzed what is happening, and saw a classic case of 
> fragmentation: the current implementation of BucketCache (mainly 
> BucketAllocator) relies on the logic that fullyFreeBuckets are available for 
> switching/adjusting cache usage between different bucketSizes. But once a 
> compaction / bulkload happens and blocks are evicted from a bucket size, they 
> are usually evicted from random places in the buckets of that bucketSize, 
> locking up the number of buckets associated with that bucketSize; in the 
> worst cases of fragmentation we have seen some bucketSizes with an occupancy 
> ratio of < 10%, yet without any completelyFreeBuckets to share with the other 
> bucketSizes.
> Currently the existing eviction logic helps in the cases where cache usage is 
> more than the MEMORY_FACTOR or MULTI_FACTOR, but once those evictions are 
> done, the eviction (freeSpace function) will not evict anything more, and 
> cache utilization will be stuck at that value without any allocations for 
> other required sizes.
> The fix we came up with is simple: defragment (compact) the bucketSize, which 
> increases the occupancy ratio and also frees up buckets to be fullyFree. This 
> logic itself is not complicated, as the bucketAllocator takes care of packing 
> the blocks into the buckets; we need to evict and re-allocate the blocks for 
> all the bucketSizes that don't fit the criteria.
> I am attaching an initial patch just to give an idea of what we are thinking, 
> and I'll improve it based on the comments from the community.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16630) Fragmentation in long running Bucket Cache

2016-09-19 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15505542#comment-15505542
 ] 

chunhui shen commented on HBASE-16630:
--

1. Fix the issue that freeSpace doesn't free anything (required).
2. Defragmentation (optional):
 a. Would increase the cache usage ratio quickly; without defragmentation it 
would take a relatively long time. 
 b. Resource overhead (byte copying). The rate should be limited, for example 
triggering only once per hour (see the sketch below).
 c. Add a unit test to ensure correctness with defragmentation.


Just my thought.
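A minimal sketch of the rate limit in (b); maybeDefragment and defragment are 
hypothetical names, not code from the patch:
{code}
private long lastDefragTime = 0;
private static final long DEFRAG_INTERVAL_MS = 60L * 60 * 1000;  // once per hour

void maybeDefragment() {
  long now = System.currentTimeMillis();
  if (now - lastDefragTime < DEFRAG_INTERVAL_MS) {
    return;                      // cap the byte-copy overhead
  }
  lastDefragTime = now;
  defragment();                  // evict and re-pack sparse buckets
}
{code}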

> Fragmentation in long running Bucket Cache
> --
>
> Key: HBASE-16630
> URL: https://issues.apache.org/jira/browse/HBASE-16630
> Project: HBase
>  Issue Type: Bug
>  Components: BucketCache
>Affects Versions: 2.0.0, 1.1.6, 1.3.1, 1.2.3
>Reporter: deepankar
>Assignee: deepankar
> Attachments: HBASE-16630.patch
>
>
> As we have been running the bucket cache for a long time in our system, we 
> are observing cases where some nodes, after some time, do not fully utilize 
> the bucket cache; in some cases it is even worse, in the sense that they get 
> stuck at a value < 0.25% of the bucket cache (DEFAULT_MEMORY_FACTOR, as all 
> our tables are configured in-memory for simplicity's sake).
> We took a heap dump, analyzed what is happening, and saw a classic case of 
> fragmentation: the current implementation of BucketCache (mainly 
> BucketAllocator) relies on the logic that fullyFreeBuckets are available for 
> switching/adjusting cache usage between different bucketSizes. But once a 
> compaction / bulkload happens and blocks are evicted from a bucket size, they 
> are usually evicted from random places in the buckets of that bucketSize, 
> locking up the number of buckets associated with that bucketSize; in the 
> worst cases of fragmentation we have seen some bucketSizes with an occupancy 
> ratio of < 10%, yet without any completelyFreeBuckets to share with the other 
> bucketSizes.
> Currently the existing eviction logic helps in the cases where cache usage is 
> more than the MEMORY_FACTOR or MULTI_FACTOR, but once those evictions are 
> done, the eviction (freeSpace function) will not evict anything more, and 
> cache utilization will be stuck at that value without any allocations for 
> other required sizes.
> The fix we came up with is simple: defragment (compact) the bucketSize, which 
> increases the occupancy ratio and also frees up buckets to be fullyFree. This 
> logic itself is not complicated, as the bucketAllocator takes care of packing 
> the blocks into the buckets; we need to evict and re-allocate the blocks for 
> all the bucketSizes that don't fit the criteria.
> I am attaching an initial patch just to give an idea of what we are thinking, 
> and I'll improve it based on the comments from the community.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16630) Fragmentation in long running Bucket Cache

2016-09-17 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15500076#comment-15500076
 ] 

chunhui shen commented on HBASE-16630:
--

Another thing to do: 
in one BucketCache#freeSpace round, make sure all BucketSizeInfos have a free 
count. 

> Fragmentation in long running Bucket Cache
> --
>
> Key: HBASE-16630
> URL: https://issues.apache.org/jira/browse/HBASE-16630
> Project: HBase
>  Issue Type: Bug
>  Components: BucketCache
>Affects Versions: 2.0.0, 1.1.6, 1.3.1, 1.2.3
>Reporter: deepankar
>Assignee: deepankar
> Attachments: HBASE-16630.patch
>
>
> As we have been running the bucket cache for a long time in our system, we 
> are observing cases where some nodes, after some time, do not fully utilize 
> the bucket cache; in some cases it is even worse, in the sense that they get 
> stuck at a value < 0.25% of the bucket cache (DEFAULT_MEMORY_FACTOR, as all 
> our tables are configured in-memory for simplicity's sake).
> We took a heap dump, analyzed what is happening, and saw a classic case of 
> fragmentation: the current implementation of BucketCache (mainly 
> BucketAllocator) relies on the logic that fullyFreeBuckets are available for 
> switching/adjusting cache usage between different bucketSizes. But once a 
> compaction / bulkload happens and blocks are evicted from a bucket size, they 
> are usually evicted from random places in the buckets of that bucketSize, 
> locking up the number of buckets associated with that bucketSize; in the 
> worst cases of fragmentation we have seen some bucketSizes with an occupancy 
> ratio of < 10%, yet without any completelyFreeBuckets to share with the other 
> bucketSizes.
> Currently the existing eviction logic helps in the cases where cache usage is 
> more than the MEMORY_FACTOR or MULTI_FACTOR, but once those evictions are 
> done, the eviction (freeSpace function) will not evict anything more, and 
> cache utilization will be stuck at that value without any allocations for 
> other required sizes.
> The fix we came up with is simple: defragment (compact) the bucketSize, which 
> increases the occupancy ratio and also frees up buckets to be fullyFree. This 
> logic itself is not complicated, as the bucketAllocator takes care of packing 
> the blocks into the buckets; we need to evict and re-allocate the blocks for 
> all the bucketSizes that don't fit the criteria.
> I am attaching an initial patch just to give an idea of what we are thinking, 
> and I'll improve it based on the comments from the community.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16630) Fragmentation in long running Bucket Cache

2016-09-17 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15500060#comment-15500060
 ] 

chunhui shen commented on HBASE-16630:
--

{code}
BucketCache#freeSpace
    if (needFreeForExtra) {
      bucketQueue.clear();
-     remainingBuckets = 2;
+     remainingBuckets = 3;

      bucketQueue.add(bucketSingle);
      bucketQueue.add(bucketMulti);
+     bucketQueue.add(bucketMemory);
{code}
It seems the issue mentioned in the description could be solved by the above 
change.
Just FYI


> Fragmentation in long running Bucket Cache
> --
>
> Key: HBASE-16630
> URL: https://issues.apache.org/jira/browse/HBASE-16630
> Project: HBase
>  Issue Type: Bug
>  Components: BucketCache
>Affects Versions: 2.0.0, 1.1.6, 1.3.1, 1.2.3
>Reporter: deepankar
>Assignee: deepankar
> Attachments: HBASE-16630.patch
>
>
> As we have been running the bucket cache for a long time in our system, we 
> are observing cases where some nodes, after some time, do not fully utilize 
> the bucket cache; in some cases it is even worse, in the sense that they get 
> stuck at a value < 0.25% of the bucket cache (DEFAULT_MEMORY_FACTOR, as all 
> our tables are configured in-memory for simplicity's sake).
> We took a heap dump, analyzed what is happening, and saw a classic case of 
> fragmentation: the current implementation of BucketCache (mainly 
> BucketAllocator) relies on the logic that fullyFreeBuckets are available for 
> switching/adjusting cache usage between different bucketSizes. But once a 
> compaction / bulkload happens and blocks are evicted from a bucket size, they 
> are usually evicted from random places in the buckets of that bucketSize, 
> locking up the number of buckets associated with that bucketSize; in the 
> worst cases of fragmentation we have seen some bucketSizes with an occupancy 
> ratio of < 10%, yet without any completelyFreeBuckets to share with the other 
> bucketSizes.
> Currently the existing eviction logic helps in the cases where cache usage is 
> more than the MEMORY_FACTOR or MULTI_FACTOR, but once those evictions are 
> done, the eviction (freeSpace function) will not evict anything more, and 
> cache utilization will be stuck at that value without any allocations for 
> other required sizes.
> The fix we came up with is simple: defragment (compact) the bucketSize, which 
> increases the occupancy ratio and also frees up buckets to be fullyFree. This 
> logic itself is not complicated, as the bucketAllocator takes care of packing 
> the blocks into the buckets; we need to evict and re-allocate the blocks for 
> all the bucketSizes that don't fit the criteria.
> I am attaching an initial patch just to give an idea of what we are thinking, 
> and I'll improve it based on the comments from the community.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14624) BucketCache.freeBlock is too expensive

2015-10-23 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14970567#comment-14970567
 ] 

chunhui shen commented on HBASE-14624:
--

{code}
- b = freeBuckets.get(freeBuckets.size() - 1);
+ b = freeBuckets.iterator().next();
{code}
In this context, the first entry and the last entry are both OK.


> BucketCache.freeBlock is too expensive
> --
>
> Key: HBASE-14624
> URL: https://issues.apache.org/jira/browse/HBASE-14624
> Project: HBase
>  Issue Type: Bug
>  Components: BlockCache
>Affects Versions: 1.0.0
>Reporter: Randy Fox
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.0.3, 1.1.3, 0.98.16
>
> Attachments: 14624-v1.txt, 14624-v2.txt
>
>
> Moving regions is unacceptably slow when using bucket cache, as it takes too 
> long to free all the blocks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14624) BucketCache.freeBlock is too expensive

2015-10-21 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14968599#comment-14968599
 ] 

chunhui shen commented on HBASE-14624:
--

What's the size of the region?
You could set EVICT_BLOCKS_ON_CLOSE to false, so freeBlocks won't be called 
when moving a region.

About a code improvement:
final class BucketSizeInfo {
  private List<Bucket> bucketList, freeBuckets, completelyFreeBuckets;

The above Lists should be changed to HashSets to improve performance, since the 
contains and remove operations are expensive on a List; a small illustration 
follows.
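A self-contained sketch of the suggested change (the Bucket stub and method 
names are hypothetical, not the actual BucketAllocator code):
{code}
import java.util.HashSet;
import java.util.Set;

final class BucketSizeInfoSketch {
  static final class Bucket {}               // stub for illustration

  // A HashSet gives O(1) expected-time contains/remove; List.contains and
  // List.remove are O(n), which adds up when closing a region frees
  // thousands of blocks at once.
  private final Set<Bucket> freeBuckets = new HashSet<>();
  private final Set<Bucket> completelyFreeBuckets = new HashSet<>();

  void free(Bucket b, boolean completelyFree) {
    freeBuckets.add(b);
    if (completelyFree) {
      completelyFreeBuckets.add(b);
    }
  }

  void allocateFrom(Bucket b) {
    completelyFreeBuckets.remove(b);         // O(1) here, O(n) with a List
  }
}
{code}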


> BucketCache.freeBlock is too expensive
> --
>
> Key: HBASE-14624
> URL: https://issues.apache.org/jira/browse/HBASE-14624
> Project: HBase
>  Issue Type: Improvement
>  Components: BlockCache
>Affects Versions: 1.0.0
>Reporter: Randy Fox
>
> Moving regions is unacceptably slow when using bucket cache, as it takes too 
> long to free all the blocks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14497) Reverse Scan threw StackOverflow caused by readPt checking

2015-09-29 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14936206#comment-14936206
 ] 

chunhui shen commented on HBASE-14497:
--

+1 on patch v6

> Reverse Scan threw StackOverflow caused by readPt checking
> --
>
> Key: HBASE-14497
> URL: https://issues.apache.org/jira/browse/HBASE-14497
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0, 0.98.14, 1.3.0
>Reporter: Yerui Sun
>Assignee: Yerui Sun
> Fix For: 2.0.0
>
> Attachments: 14497-master-v6.patch, HBASE-14497-0.98.patch, 
> HBASE-14497-branch-1-v2.patch, HBASE-14497-branch-1-v3.patch, 
> HBASE-14497-branch-1.patch, HBASE-14497-master-v2.patch, 
> HBASE-14497-master-v3.patch, HBASE-14497-master-v3.patch, 
> HBASE-14497-master-v4.patch, HBASE-14497-master-v5.patch, 
> HBASE-14497-master.patch
>
>
> I met a stack overflow error in StoreFileScanner.seekToPreviousRow using a 
> reversed scan. I searched and found HBASE-14155, but it seems to have a 
> different cause.
> seekToPreviousRow fetches the closest preceding row and compares its mvcc to 
> the readPt, which was acquired when the scanner was created. If the row's 
> mvcc is bigger than the readPt, a recursive call of seekToPreviousRow is 
> invoked to find the next closest preceding row.
> Consider that we created a scanner for a reversed scan, and some data with 
> smaller rows was written and flushed before calling scanner next. When 
> seekToPreviousRow was invoked, it would call itself recursively until all 
> rows written after the scanner was created had been iterated. The depth of 
> the recursive call stack depends on the count of rows; a stack overflow 
> error will be thrown if the count of rows is large, like 1.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14497) Reverse Scan threw StackOverflow caused by readPt checking

2015-09-28 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14934483#comment-14934483
 ] 

chunhui shen commented on HBASE-14497:
--

Good catch, [~sunyerui].
MemstoreScanner should have the same problem; could you fix it?
The current solution in the patch is good for me.

Thanks

> Reverse Scan threw StackOverflow caused by readPt checking
> --
>
> Key: HBASE-14497
> URL: https://issues.apache.org/jira/browse/HBASE-14497
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0, 0.98.14, 1.3.0
>Reporter: Yerui Sun
> Attachments: HBASE-14497-0.98.patch, HBASE-14497-branch-1-v2.patch, 
> HBASE-14497-branch-1.patch, HBASE-14497-master-v2.patch, 
> HBASE-14497-master-v3.patch, HBASE-14497-master-v3.patch, 
> HBASE-14497-master.patch
>
>
> I met a stack overflow error in StoreFileScanner.seekToPreviousRow using a 
> reversed scan. I searched and found HBASE-14155, but it seems to have a 
> different cause.
> seekToPreviousRow fetches the closest preceding row and compares its mvcc to 
> the readPt, which was acquired when the scanner was created. If the row's 
> mvcc is bigger than the readPt, a recursive call of seekToPreviousRow is 
> invoked to find the next closest preceding row.
> Consider that we created a scanner for a reversed scan, and some data with 
> smaller rows was written and flushed before calling scanner next. When 
> seekToPreviousRow was invoked, it would call itself recursively until all 
> rows written after the scanner was created had been iterated. The depth of 
> the recursive call stack depends on the count of rows; a stack overflow 
> error will be thrown if the count of rows is large, like 1.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14306) Refine RegionGroupingProvider: fix issues and make it more scalable

2015-09-14 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14743079#comment-14743079
 ] 

chunhui shen commented on HBASE-14306:
--

+1 for patch v5

> Refine RegionGroupingProvider: fix issues and make it more scalable
> ---
>
> Key: HBASE-14306
> URL: https://issues.apache.org/jira/browse/HBASE-14306
> Project: HBase
>  Issue Type: Improvement
>  Components: wal
>Affects Versions: 2.0.0, 1.1.2
>Reporter: Yu Li
>Assignee: Yu Li
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-14306.patch, HBASE-14306_v2.patch, 
> HBASE-14306_v3.patch, HBASE-14306_v3.patch, HBASE-14306_v4.patch, 
> HBASE-14306_v5.patch
>
>
> There're multiple issues in RegionGroupingProvider, including:
> * The provider cache in it is using byte array as the key of 
> ConcurrentHashMap, which is not right (the reason is 
> [here|http://stackoverflow.com/questions/1058149/using-a-byte-array-as-hashmap-key-java])
> * It's using IdentityGroupingStrategy to get the group and using it as the 
> key of the cache, which means the cache will include an entry for each 
> region. This is especially unnecessary when using BoundedRegionGroupingProvider
> Besides fixing the above issues, I suggest changing 
> BoundedRegionGroupingProvider from a *provider* to a pluggable *strategy*, 
> which will make the whole picture much clearer.
> For more details, please refer to the patch



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-6617) ReplicationSourceManager should be able to track multiple WAL paths

2015-09-08 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14736178#comment-14736178
 ] 

chunhui shen commented on HBASE-6617:
-

+1 on patch v12.

> ReplicationSourceManager should be able to track multiple WAL paths
> ---
>
> Key: HBASE-6617
> URL: https://issues.apache.org/jira/browse/HBASE-6617
> Project: HBase
>  Issue Type: Improvement
>  Components: Replication
>Reporter: Ted Yu
>Assignee: Yu Li
> Fix For: 2.0.0, 1.3.0
>
> Attachments: 6617-v11.patch, HBASE-6617.branch-1.patch, 
> HBASE-6617.branch-1.v2.patch, HBASE-6617.patch, HBASE-6617_v10.patch, 
> HBASE-6617_v11.patch, HBASE-6617_v12.patch, HBASE-6617_v2.patch, 
> HBASE-6617_v3.patch, HBASE-6617_v4.patch, HBASE-6617_v7.patch, 
> HBASE-6617_v9.patch
>
>
> Currently ReplicationSourceManager uses logRolled() to receive notifications 
> about new HLogs and remembers the latest in latestPath.
> When the region server has multiple-WAL support, we need to keep track of 
> multiple Paths in ReplicationSourceManager



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-6617) ReplicationSourceManager should be able to track multiple WAL paths

2015-09-05 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14732164#comment-14732164
 ] 

chunhui shen commented on HBASE-6617:
-

Patch v11 seems good to me.

> ReplicationSourceManager should be able to track multiple WAL paths
> ---
>
> Key: HBASE-6617
> URL: https://issues.apache.org/jira/browse/HBASE-6617
> Project: HBase
>  Issue Type: Improvement
>  Components: Replication
>Reporter: Ted Yu
>Assignee: Yu Li
> Fix For: 2.0.0, 1.3.0
>
> Attachments: 6617-v11.patch, HBASE-6617.patch, HBASE-6617_v10.patch, 
> HBASE-6617_v11.patch, HBASE-6617_v2.patch, HBASE-6617_v3.patch, 
> HBASE-6617_v4.patch, HBASE-6617_v7.patch, HBASE-6617_v9.patch
>
>
> Currently ReplicationSourceManager uses logRolled() to receive notifications 
> about new HLogs and remembers the latest in latestPath.
> When the region server has multiple-WAL support, we need to keep track of 
> multiple Paths in ReplicationSourceManager



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-6617) ReplicationSourceManager should be able to track multiple WAL paths

2015-08-25 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14712370#comment-14712370
 ] 

chunhui shen commented on HBASE-6617:
-

Source metrics are another problem, I think.
FYI

> ReplicationSourceManager should be able to track multiple WAL paths
> ---
>
> Key: HBASE-6617
> URL: https://issues.apache.org/jira/browse/HBASE-6617
> Project: HBase
>  Issue Type: Improvement
>  Components: Replication
>Reporter: Ted Yu
>Assignee: Yu Li
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-6617.patch, HBASE-6617_v2.patch, 
> HBASE-6617_v3.patch
>
>
> Currently ReplicationSourceManager uses logRolled() to receive notifications 
> about new HLogs and remembers the latest in latestPath.
> When the region server has multiple-WAL support, we need to keep track of 
> multiple Paths in ReplicationSourceManager



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-6617) ReplicationSourceManager should be able to track multiple WAL paths

2015-08-25 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14712364#comment-14712364
 ] 

chunhui shen commented on HBASE-6617:
-

Took a quick look at the patch.
In my view, we should keep the function of ReplicationSource unchanged: it 
replicates the logs to one peer cluster. 
With the patch, we will have many ReplicationSource objects if many WAL groups 
exist. And I only found ReplicationSource creation when a new WAL group is 
used, but no deletion. After running for a long time, as WAL groups are 
created and deleted, the accumulated ReplicationSources look like trouble for 
the server.

Thus, I suggest tracking multiple logs in one replication source, rather than 
one replication source per log; see the sketch after this comment.
Correct me if something is wrong.
Thanks
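A minimal sketch of that suggestion (hypothetical names, not the patch): a 
single ReplicationSource per peer keeps one log queue per WAL group, so 
creating or deleting WAL groups never creates or destroys sources.
{code}
import java.util.Map;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

import org.apache.hadoop.fs.Path;

class ReplicationSourceSketch {
  // One queue of WALs per WAL group, all owned by the same source.
  private final Map<String, Queue<Path>> walGroupQueues = new ConcurrentHashMap<>();

  void enqueueLog(String walGroupId, Path log) {
    walGroupQueues
        .computeIfAbsent(walGroupId, id -> new ConcurrentLinkedQueue<>())
        .add(log);
  }
}
{code}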

> ReplicationSourceManager should be able to track multiple WAL paths
> ---
>
> Key: HBASE-6617
> URL: https://issues.apache.org/jira/browse/HBASE-6617
> Project: HBase
>  Issue Type: Improvement
>  Components: Replication
>Reporter: Ted Yu
>Assignee: Yu Li
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-6617.patch, HBASE-6617_v2.patch, 
> HBASE-6617_v3.patch
>
>
> Currently ReplicationSourceManager uses logRolled() to receive notifications 
> about new HLogs and remembers the latest in latestPath.
> When the region server has multiple-WAL support, we need to keep track of 
> multiple Paths in ReplicationSourceManager



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12915) Disallow small scan with batching

2015-01-25 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14291386#comment-14291386
 ] 

chunhui shen commented on HBASE-12915:
--

The current ClientSmallScanner is not compatible with the batch argument.
Patch lgtm, +1

> Disallow small scan with batching
> -
>
> Key: HBASE-12915
> URL: https://issues.apache.org/jira/browse/HBASE-12915
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 12915-001.txt
>
>
> If the user sets batching in the Scan object, ClientSmallScanner may return 
> unexpected results, because data from the same row may appear in multiple 
> Result objects but ClientSmallScanner considers different Results to 
> correspond to different rows.
> Small scans with batching should be disallowed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11295) Long running scan produces OutOfOrderScannerNextException

2015-01-09 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14270811#comment-14270811
 ] 

chunhui shen commented on HBASE-11295:
--

bq.It does seem reasonable to have OOSNE retry the same number of configured 
times as other retryable IOExceptions.

In my personal opinion, OOSNE is caused by an RPC timeout on a long-running 
scan; if the scan run time > RPC timeout, increasing the retry count won't 
gain anything. 

But using the same retry mechanism seems reasonable, so I am +1 for this.


> Long running scan produces OutOfOrderScannerNextException
> -
>
> Key: HBASE-11295
> URL: https://issues.apache.org/jira/browse/HBASE-11295
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 0.96.0
>Reporter: Jeff Cunningham
>Assignee: Andrew Purtell
>Priority: Critical
> Fix For: 1.0.0, 2.0.0, 0.98.10, 1.1.0
>
> Attachments: OutOfOrderScannerNextException.tar.gz
>
>
> Attached Files:
> HRegionServer.java - instrumented from 0.96.1.1-cdh5.0.0
> HBaseLeaseTimeoutIT.java - reproducing JUnit 4 test
> WaitFilter.java - Scan filter (extends FilterBase) that overrides 
> filterRowKey() to sleep during invocation
> SpliceFilter.proto - Protobuf definition for WaitFilter.java
> OutOfOrderScann_InstramentedServer.log - instrumented server log
> Steps.txt - this note
> Set up:
> In HBaseLeaseTimeoutIT, create a scan, set the given filter (which sleeps in 
> overridden filterRowKey() method) and set it on the scan, and scan the table.
> This is done in test client_0x0_server_15x10().
> Here's what I'm seeing (see also attached log):
> A new request comes into server (ID 1940798815214593802 - 
> RpcServer.handler=96) and a RegionScanner is created for it, cached by ID, 
> immediately looked up again, and the cached RegionScannerHolder's nextCallSeq 
> incremented (now at 1).
> The RegionScan thread goes to sleep in WaitFilter#filterRowKey().
> A short (variable) period later, another request comes into the server (ID 
> 8946109289649235722 - RpcServer.handler=98) and the same series of events 
> happen to this request.
> At this point both RegionScanner threads are sleeping in 
> WaitFilter.filterRowKey(). After another period, the client retries another 
> scan request which thinks its next_call_seq is 0.  However, HRegionServer's 
> cached RegionScannerHolder thinks the matching RegionScanner's nextCallSeq 
> should be 1.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12827) set rowOffsetPerColumnFamily on ClientSmallScanner if lastResult is not null.

2015-01-08 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14270573#comment-14270573
 ] 

chunhui shen commented on HBASE-12827:
--

ClientSmallScanner may return unexpected results if the scan sets batching. 
(The same row's data may appear in multiple Result objects if scan.getBatch() 
> 0, but ClientSmallScanner assumes different Results have different rows.)

The above patch also couldn't fix this problem, since it only fixes the case 
where the scan results all belong to one row.

An easy solution is to use ClientScanner rather than ClientSmallScanner if 
scan.getBatch() > 0, as sketched below.
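A minimal sketch of that fallback (a hypothetical guard, not a committed fix):
{code}
// Small scans and batching don't mix: force the normal ClientScanner
// path whenever a batch limit is set.
if (scan.isSmall() && scan.getBatch() > 0) {
  scan.setSmall(false);
}
{code}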

> set rowOffsetPerColumnFamily on ClientSmallScanner if lastResult is not null.
> -
>
> Key: HBASE-12827
> URL: https://issues.apache.org/jira/browse/HBASE-12827
> Project: HBase
>  Issue Type: Bug
>  Components: Client, hbase, Scanners
>Reporter: Toshimasa NASU
> Attachments: HBASE-12827-v1.patch
>
>
> When you use the ClientSmallScanner, the same Result is acquired repeatedly, 
> resulting in an infinite loop.
> This occurs if you iterate beyond (batch size * caching size) of the Scan.
> The solution, I think, is to correctly set the rowOffsetPerColumnFamily.
> It can be resolved by the following patch:
> https://github.com/toshimasa-nasu/hbase/commit/2c35914624d3494c79114926d35fc886c9a235ec
> {code}
>// When fetching results from server, skip the first result if it has the 
> same
>// row with this one
>private byte[] skipRowOfFirstResult = null;
> +  private boolean alreadyGetRowOfFirstResult = false;
> +  private int nextRowOffsetPerColumnFamily = 0;
>  
>/**
> * Create a new ClientSmallScanner for the specified table. An HConnection
>  @@ -142,10 +144,19 @@ private boolean nextScanner(int nbRows, final boolean 
> done,
>  LOG.debug("Finished with region " + this.currentRegion);
>}
>  } else if (this.lastResult != null) {
> +  if (alreadyGetRowOfFirstResult) {
> +nextRowOffsetPerColumnFamily += (this.scan.getBatch() * 
> this.caching);
> +  } else {
> +nextRowOffsetPerColumnFamily = (this.scan.getBatch() * (this.caching 
> - 1));
> +  }
> +  this.scan.setRowOffsetPerColumnFamily(nextRowOffsetPerColumnFamily);
> +  alreadyGetRowOfFirstResult = true;
>localStartKey = this.lastResult.getRow();
>skipRowOfFirstResult = this.lastResult.getRow();
>cacheNum++;
>  } else {
> +  alreadyGetRowOfFirstResult = false;
> +  nextRowOffsetPerColumnFamily = 0;
>localStartKey = this.scan.getStartRow();
>  }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11803) Programming model for reverse scan is confusing

2014-08-26 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14110551#comment-14110551
 ] 

chunhui shen commented on HBASE-11803:
--

Patch lgtm...

A small comment:
ExclusiveStartFilter#filterRowKey could return false directly after one 
comparison; I mean caching the comparison result and returning the cached 
result on subsequent calls, as sketched below.
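A sketch of that idea (ExclusiveStartFilter comes from the attached patch; 
this body is illustrative only, assuming the 0.98-era FilterBase API):
{code}
import org.apache.hadoop.hbase.filter.FilterBase;
import org.apache.hadoop.hbase.util.Bytes;

public class ExclusiveStartFilter extends FilterBase {
  private final byte[] startRow;
  private boolean pastStartRow = false;   // cached result of the comparison

  public ExclusiveStartFilter(byte[] startRow) {
    this.startRow = startRow;
  }

  @Override
  public boolean filterRowKey(byte[] buffer, int offset, int length) {
    if (pastStartRow) {
      return false;                       // no comparison needed any more
    }
    boolean isStartRow =
        Bytes.equals(buffer, offset, length, startRow, 0, startRow.length);
    if (!isStartRow) {
      pastStartRow = true;                // rows are sorted; cache the answer
    }
    return isStartRow;                    // true = exclude the start row
  }
}
{code}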

> Programming model for reverse scan is confusing
> ---
>
> Key: HBASE-11803
> URL: https://issues.apache.org/jira/browse/HBASE-11803
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 0.98.1
>Reporter: James Taylor
>Assignee: Ted Yu
> Attachments: 11803-v1.txt
>
>
> The reverse scan is a very nice feature in HBase. We leverage it in Apache 
> Phoenix 4.1 when possible and see a huge boost in performance over 
> re-ordering the result set ourselves.
> However, the way in which you have to adjust the start/stop key is confusing. 
> Our use case is that we have a scan that needs to be done and we've 
> calculated an inclusive start row and an exclusive stop row. This is the way 
> region boundaries are which is convenient as they can easily be intersected 
> against the scan stop/start row. When we use a reverse scan, we are forced to 
> switch the start and stop row values of the scan *and* adjust the byte values 
> from inclusive to exclusive and from exclusive to inclusive. The former is 
> not too bad, as you can just add a zero byte, but the latter is problematic. 
> You can decrease the last byte by one, but you need to add an indeterminate 
> number of 0xFF bytes to ensure you're not including a row unintentionally.
> IMHO, it would be much cleaner to just keep the start/stop row as is and just 
> call the Scan.setReversed(true) method.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11803) Programming model for reverse scan is confusing

2014-08-22 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106657#comment-14106657
 ] 

chunhui shen commented on HBASE-11803:
--

Now we have the InclusiveStopFilter to make the stop row inclusive.
If we add a new ExclusiveStartFilter, I think your problem could be fixed.


> Programming model for reverse scan is confusing
> ---
>
> Key: HBASE-11803
> URL: https://issues.apache.org/jira/browse/HBASE-11803
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 0.98.1
>Reporter: James Taylor
>
> The reverse scan is a very nice feature in HBase. We leverage it in Apache 
> Phoenix 4.1 when possible and see a huge boost in performance over 
> re-ordering the result set ourselves.
> However, the way in which you have to adjust the start/stop key is confusing. 
> Our use case is that we have a scan that needs to be done and we've 
> calculated an inclusive start row and an exclusive stop row. This is the way 
> region boundaries are which is convenient as they can easily be intersected 
> against the scan stop/start row. When we use a reverse scan, we are forced to 
> switch the start and stop row values of the scan *and* adjust the byte values 
> from inclusive to exclusive and from exclusive to inclusive. The former is 
> not too bad, as you can just add a zero byte, but the latter is problematic. 
> You can decrease the last byte by one, but you need to add an indeterminate 
> number of 0xFF bytes to ensure you're not including a row unintentionally.
> IMHO, it would be much cleaner to just keep the start/stop row as is and just 
> call the Scan.setReversed(true) method.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11678) BucketCache ramCache fills heap after running a few hours

2014-08-05 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14087154#comment-14087154
 ] 

chunhui shen commented on HBASE-11678:
--

[~stack]
We also encountered this bug recently. 
The entry will not be removed from ramCache if RAMQueueEntry#writeToCache 
throws an exception, which will cause an OOM.

About the patch: we should remove the failed entries from ramCache, since 
these entries may always fail, e.g. when an entry is bigger than BucketCache 
supports (2MB by default; see BucketAllocator#allocateBlock).

So we should add removal code after doDrain:
{code}
+    if (LOG.isDebugEnabled() && countBefore != countAfter) {
+      LOG.debug("Failed drain all: countBefore=" + countBefore +
+        ", countAfter=" + countAfter);
+    }
+    for (RAMQueueEntry ramEntry : entries) {
+      ramCache.remove(ramEntry.getKey());
+    }
{code}

> BucketCache ramCache fills heap after running a few hours
> -
>
> Key: HBASE-11678
> URL: https://issues.apache.org/jira/browse/HBASE-11678
> Project: HBase
>  Issue Type: Bug
>  Components: BlockCache
>Affects Versions: 0.99.0, 0.98.5, 2.0.0
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Attachments: 
> 0001-When-we-failed-add-an-entry-failing-with-a-CacheFull.patch
>
>
> Testing BucketCache, my heap filled after running for hours. Dumping heap, 
> culprit is the ramCache Map in BucketCache.  Tried running with more writer 
> threads but made no difference.  Trying to figure now how our accounting is 
> going wonky.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11667) Simplify ClientScanner logic for NSREs.

2014-08-04 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14085727#comment-14085727
 ] 

chunhui shen commented on HBASE-11667:
--

For the test case, I mean moving the split action into the while block, like 
this:
{code}
  while (rs.next() != null) {
    c++;
    if (c % 4333 == 1) {
      TEST_UTIL.getHBaseAdmin().split(TABLE);
    }
  }
  assertEquals(7733, c);
{code}

In our internal 0.94 branch, the test fails with the patch:
{noformat}
java.lang.AssertionError: expected:<7733> but was:<7743>
  at org.junit.Assert.fail(Assert.java:93)
  at org.junit.Assert.failNotEquals(Assert.java:647)
  at org.junit.Assert.assertEquals(Assert.java:128)
  at org.junit.Assert.assertEquals(Assert.java:472)
  at org.junit.Assert.assertEquals(Assert.java:456)
  at org.apache.hadoop.hbase.client.TestFromClientSide.testScansWithSplits(TestFromClientSide.java:5096)
{noformat}

[~lhofhansl]
Could you apply the above change to the test case and try it?

> Simplify ClientScanner logic for NSREs.
> ---
>
> Key: HBASE-11667
> URL: https://issues.apache.org/jira/browse/HBASE-11667
> Project: HBase
>  Issue Type: Improvement
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
> Fix For: 0.99.0, 2.0.0, 0.94.23, 0.98.6
>
> Attachments: 11667-0.94.txt, 11667-trunk.txt, HBASE-11667-0.98.patch
>
>
> We ran into an issue with Phoenix where a RegionObserver coprocessor 
> intercepts a scan and returns an aggregate (in this case a count) with a fake 
> row key. It turns out this does not work when the {{ClientScanner}} 
> encounters NSREs, as it uses the last key it saw to reset the scanner to try 
> again (which in this case would be the fake key).
> While this is arguably a rare case and one could also argue that a region 
> observer just shouldn't do this... While looking at {{ClientScanner}}'s code 
> I found this logic not necessary.
> A NSRE occurred because we contacted a region server with a key that it no 
> longer hosts. This is the start key, so it is always correct to retry with 
> this same key. That simplifies the ClientScanner logic and also makes this 
> sort of coprocessor possible.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11667) Simplify ClientScanner logic for NSREs.

2014-08-04 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14085660#comment-14085660
 ] 

chunhui shen commented on HBASE-11667:
--

The skipFirst logic is used to handle the case where the client issues a 
'next' request and the server returns an NSRE in the middle of scanning.
I think it shouldn't be removed directly.

For example, take a region containing the rows 'aaa','bbb','ccc','ddd' and do 
the following:
1. Client opens a scanner (empty start row) on this region.
2. Client calls next and gets row 'aaa'.
3. Move the region to another server.
4. Client sends the next request to the old server, gets an NSRE, and 
therefore reopens the scanner with 'aaa' (the last result) as the start row.
5. Client should skip the first row 'aaa'; see the sketch below.

So, for the test case testScansWithSplits(), we should do 
"TEST_UTIL.getHBaseAdmin().split(TABLE)" in the middle of scanning rather than 
after scanning completes.

Pardon me if something is wrong.
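A simplified sketch of the skip-first handling (illustrative only; 
callServerWithRetries is a hypothetical stand-in for the actual RPC call):
{code}
// After reopening on an NSRE with lastResult's row as the (inclusive)
// start row, drop the first returned row if it repeats the last row seen.
Result[] values = callServerWithRetries(nbRows);
if (skipFirst && values.length > 0
    && Bytes.equals(values[0].getRow(), lastResult.getRow())) {
  values = Arrays.copyOfRange(values, 1, values.length);
}
{code}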

> Simplify ClientScanner logic for NSREs.
> ---
>
> Key: HBASE-11667
> URL: https://issues.apache.org/jira/browse/HBASE-11667
> Project: HBase
>  Issue Type: Improvement
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
> Fix For: 0.99.0, 0.98.5, 2.0.0, 0.94.23
>
> Attachments: 11667-0.94.txt, 11667-trunk.txt, HBASE-11667-0.98.patch
>
>
> We ran into an issue with Phoenix where a RegionObserver coprocessor 
> intercepts a scan and returns an aggregate (in this case a count) with a fake 
> row key. It turns out this does not work when the {{ClientScanner}} 
> encounters NSREs, as it uses the last key it saw to reset the scanner to try 
> again (which in this case would be the fake key).
> While this is arguably a rare case and one could also argue that a region 
> observer just shouldn't do this... While looking at {{ClientScanner}}'s code 
> I found this logic not necessary.
> A NSRE occurred because we contacted a region server with a key that it no 
> longer hosts. This is the start key, so it is always correct to retry with 
> this same key. That simplifies the ClientScanner logic and also makes this 
> sort of coprocessor possible.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11551) BucketCache$WriterThread.run() doesn't handle exceptions correctly

2014-07-23 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14071590#comment-14071590
 ] 

chunhui shen commented on HBASE-11551:
--

bq. So far, we haven't encountered such an exception.

In our clusters we already applied the same fix several months ago; we just 
thought it had a potential problem from a code point of view.

So our clusters can't show that this issue won't happen... 

> BucketCache$WriterThread.run() doesn't handle exceptions correctly
> --
>
> Key: HBASE-11551
> URL: https://issues.apache.org/jira/browse/HBASE-11551
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 11551-v1.txt
>
>
> Currently the catch is outside the while loop:
> {code}
>   try {
> while (cacheEnabled && writerEnabled) {
> ...
>   } catch (Throwable t) {
> LOG.warn("Failed doing drain", t);
>   }
> {code}
> When exception (e.g. BucketAllocatorException) is thrown, run() method would 
> terminate, silently.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11551) BucketCache$WriterThread.run() doesn't handle exceptions correctly

2014-07-22 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14071397#comment-14071397
 ] 

chunhui shen commented on HBASE-11551:
--

The above exception 'Failed allocating for block bck2_0' won't cause the 
writer thread to exit.

bq. if you talked to the fellow whose filesystem just overflowed with 
unnoticed ERROR logs because of some non-self-healing issue

If the caught exception is self-healing or occasional, the change will make 
sense.

So far, we haven't encountered such an exception. It reads as make-work code 
to me.


> BucketCache$WriterThread.run() doesn't handle exceptions correctly
> --
>
> Key: HBASE-11551
> URL: https://issues.apache.org/jira/browse/HBASE-11551
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 0.99.0, 2.0.0
>
> Attachments: 11551-v1.txt
>
>
> Currently the catch is outside the while loop:
> {code}
>   try {
> while (cacheEnabled && writerEnabled) {
> ...
>   } catch (Throwable t) {
> LOG.warn("Failed doing drain", t);
>   }
> {code}
> When exception (e.g. BucketAllocatorException) is thrown, run() method would 
> terminate, silently.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11551) BucketCache$WriterThread.run() doesn't handle exceptions correctly

2014-07-22 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14071241#comment-14071241
 ] 

chunhui shen commented on HBASE-11551:
--

bq. This patch adds a new inner try/catch that does nothing but catch an IOE

It doesn't just catch an IOE; it catches any unexpected exception.
Anyway, the writer thread shouldn't exit unless the flag 'cacheEnabled' is set 
to false, as in the sketch below.
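A simplified sketch of that intent (not the actual patch; doDrain, entries, 
and LOG follow the snippet quoted in the description):
{code}
// Catch per iteration so one failed drain doesn't kill the writer thread;
// the loop only exits when cacheEnabled is set to false.
while (cacheEnabled && writerEnabled) {
  try {
    doDrain(entries);
  } catch (Throwable t) {
    LOG.warn("Failed doing drain", t);   // log it and keep the thread alive
  }
}
{code}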

> BucketCache$WriterThread.run() doesn't handle exceptions correctly
> --
>
> Key: HBASE-11551
> URL: https://issues.apache.org/jira/browse/HBASE-11551
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 0.99.0, 2.0.0
>
> Attachments: 11551-v1.txt
>
>
> Currently the catch is outside the while loop:
> {code}
>   try {
> while (cacheEnabled && writerEnabled) {
> ...
>   } catch (Throwable t) {
> LOG.warn("Failed doing drain", t);
>   }
> {code}
> When exception (e.g. BucketAllocatorException) is thrown, run() method would 
> terminate, silently.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11551) BucketCache$WriterThread.run() doesn't handle exceptions correctly

2014-07-22 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14071235#comment-14071235
 ] 

chunhui shen commented on HBASE-11551:
--

[~stack]
If all the writer threads are dead, the bucket cache is unavailable. But an 
IOE should be normal when flushing a block to the IOEngine, so we need to 
handle the IOE.

Thus, I think it is a coding bug.

> BucketCache$WriterThread.run() doesn't handle exceptions correctly
> --
>
> Key: HBASE-11551
> URL: https://issues.apache.org/jira/browse/HBASE-11551
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 0.99.0, 2.0.0
>
> Attachments: 11551-v1.txt
>
>
> Currently the catch is outside the while loop:
> {code}
>   try {
> while (cacheEnabled && writerEnabled) {
> ...
>   } catch (Throwable t) {
> LOG.warn("Failed doing drain", t);
>   }
> {code}
> When exception (e.g. BucketAllocatorException) is thrown, run() method would 
> terminate, silently.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11551) BucketCache$WriterThread.run() doesn't handle exceptions correctly

2014-07-20 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14068167#comment-14068167
 ] 

chunhui shen commented on HBASE-11551:
--

lgtm
+1

> BucketCache$WriterThread.run() doesn't handle exceptions correctly
> --
>
> Key: HBASE-11551
> URL: https://issues.apache.org/jira/browse/HBASE-11551
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 11551-v1.txt
>
>
> Currently the catch is outside the while loop:
> {code}
>   try {
> while (cacheEnabled && writerEnabled) {
> ...
>   } catch (Throwable t) {
> LOG.warn("Failed doing drain", t);
>   }
> {code}
> When exception (e.g. BucketAllocatorException) is thrown, run() method would 
> terminate, silently.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11460) Deadlock in HMaster on masterAndZKLock in HConnectionManager

2014-07-07 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14054412#comment-14054412
 ] 

chunhui shen commented on HBASE-11460:
--

lgtm
+1 on patch

> Deadlock in HMaster on masterAndZKLock in HConnectionManager
> 
>
> Key: HBASE-11460
> URL: https://issues.apache.org/jira/browse/HBASE-11460
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Affects Versions: 0.96.0
>Reporter: Andrey Stepachev
>Assignee: Ted Yu
>Priority: Critical
> Fix For: 0.99.0
>
> Attachments: 11460-v1.txt, threads.tdump
>
>
> On one of our clusters we got a deadlock in HMaster.
> In a nutshell, the deadlock is caused by using one HConnectionManager for 
> serving both client-like calls and calls from HMaster RPC handlers.
> HBaseAdmin uses HConnectionManager, which takes the lock masterAndZKLock.
> On the other side of this game sits TablesNamespaceManager (TNM). This class 
> uses HConnectionManager too (in my case for getting the list of available 
> namespaces). 
> The problem is that the HMaster class uses TNM for serving RPC requests.
> If we look at TNM more closely, we can see that this class is totally 
> synchronized.
> That gives us a problem.
> The web interface issues a request via HConnectionManager and locks 
> masterAndZKLock.
> The connection is blocking, so RpcClient will spin, awaiting a reply (while 
> holding the lock).
> This is how it looks in the thread dump:
> {code}
>java.lang.Thread.State: TIMED_WAITING (on object monitor)
>   at java.lang.Object.wait(Native Method)
>   - waiting on <0xc8905430> (a 
> org.apache.hadoop.hbase.ipc.RpcClient$Call)
>   at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1435)
>   - locked <0xc8905430> (a 
> org.apache.hadoop.hbase.ipc.RpcClient$Call)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1653)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1711)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:40216)
>   at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(HConnectionManager.java:1467)
>   at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(HConnectionManager.java:2093)
>   at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getKeepAliveMasterService(HConnectionManager.java:1819)
>   - locked <0xd15dc668> (a java.lang.Object)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin$MasterCallable.prepare(HBaseAdmin.java:3187)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:119)
>   - locked <0xcd0c1238> (a 
> org.apache.hadoop.hbase.client.RpcRetryingCaller)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:96)
>   - locked <0xcd0c1238> (a 
> org.apache.hadoop.hbase.client.RpcRetryingCaller)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3214)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.listTableDescriptorsByNamespace(HBaseAdmin.java:2265)
> {code}
> Some other client calls an HMaster RPC, which calls TablesNamespaceManager 
> methods, which in turn block on the HConnectionManager global lock 
> masterAndZKLock.
> This is how it looks:
> {code}
>   java.lang.Thread.State: BLOCKED (on object monitor)
>   at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getKeepAliveZooKeeperWatcher(HConnectionManager.java:1699)
>   - waiting to lock <0xd15dc668> (a java.lang.Object)
>   at 
> org.apache.hadoop.hbase.client.ZooKeeperRegistry.isTableOnlineState(ZooKeeperRegistry.java:100)
>   at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.isTableDisabled(HConnectionManager.java:874)
>   at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.relocateRegion(HConnectionManager.java:1027)
>   at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getRegionLocation(HConnectionManager.java:852)
>   at 
> org.apache.hadoop.hbase.client.RegionServerCallable.prepare(RegionServerCallable.java:72)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:119)
>   - locked <0xcd0ef108> (a 
> org.apache.hadoop.hbase.client.RpcRetryingCaller)
>   at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:705)
>   at 

[jira] [Commented] (HBASE-11414) Backport HBASE-7711 "rowlock release problem with thread interruptions in batchMutate" to 0.94

2014-06-26 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14045428#comment-14045428
 ] 

chunhui shen commented on HBASE-11414:
--

+1

> Backport HBASE-7711 "rowlock release problem with thread interruptions in 
> batchMutate" to 0.94
> --
>
> Key: HBASE-11414
> URL: https://issues.apache.org/jira/browse/HBASE-11414
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 0.94.21
>
> Attachments: 11414-v1.txt
>
>
> HBASE-7711 fixed the issue where rowlock may not be released when 
> regionserver operations are interrupted.
> This fix is not in 0.94.
> Backport the fix to 0.94.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11355) a couple of callQueue related improvements

2014-06-15 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14032097#comment-14032097
 ] 

chunhui shen commented on HBASE-11355:
--

220k qps seems like great performance. Waiting for the patch :)

Agreeing with the other points mentioned: we also separated the read and write 
requests and made the response fail fast once the queue is full (see the 
sketch below).

Should we split these things into different issues?
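A minimal sketch of the fail-fast, sharded-queue idea (illustrative only, not 
the actual patch):
{code}
// One queue per shard instead of a single callQueue; reject instead of block.
BlockingQueue<Call> queue = callQueues[handlerId % callQueues.length];
if (!queue.offer(call)) {
  // callQueue.put(call) would block the reader thread; a full queue almost
  // always means the backend is slow, so failing fast lets the client retry.
  throw new IOException("Call queue is full, rejecting call");
}
{code}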

> a couple of callQueue related improvements
> --
>
> Key: HBASE-11355
> URL: https://issues.apache.org/jira/browse/HBASE-11355
> Project: HBase
>  Issue Type: Improvement
>  Components: IPC/RPC
>Affects Versions: 0.99.0, 0.94.20
>Reporter: Liang Xie
>Assignee: Liang Xie
>
> In one of my in-memory read-only tests (100% get requests), one of the top 
> scalability bottlenecks came from the single callQueue. Tentatively sharding 
> this callQueue according to the rpc handler number showed a big throughput 
> improvement (the original get() qps was around 60k; after this and other 
> hotspot tuning, I got 220k get() qps on the same single region server) in a 
> YCSB read-only scenario.
> Another thing we can do is separating the queue into a read call queue and a 
> write call queue. We have done it in our internal branch; it is helpful in 
> some outages, to avoid all-read or all-write requests using up all handler 
> threads.
> One more thing is changing the current blocking behavior once the callQueue 
> is full. Considering that a full callQueue almost always means the backend 
> processing is slow somehow, a fail-fast here would be more reasonable if we 
> use HBase as a low-latency processing system. See "callQueue.put(call)".



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11107) Provide utility method equivalent to 0.92's Result.getBytes().getSize()

2014-06-12 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14030155#comment-14030155
 ] 

chunhui shen commented on HBASE-11107:
--

lgtm, +1

> Provide utility method equivalent to 0.92's Result.getBytes().getSize()
> ---
>
> Key: HBASE-11107
> URL: https://issues.apache.org/jira/browse/HBASE-11107
> Project: HBase
>  Issue Type: Task
>Reporter: Ted Yu
>Assignee: Gustavo Anatoly
>Priority: Trivial
> Attachments: HBASE-11107.patch
>
>
> Currently the user has to write code similar to the following as a 
> replacement for Result.getBytes().getSize():
> {code}
> +Cell[] cellValues = resultRow.rawCells();
> +
> +long size = 0L;
> +if (null != cellValues) {
> +  for (Cell cellValue : cellValues) {
> +size += KeyValueUtil.ensureKeyValue(cellValue).heapSize();
> +  } 
> +}
> {code}
> In ClientScanner, we have:
> {code}
>   for (Cell kv : rs.rawCells()) {
> // TODO make method in Cell or CellUtil
> remainingResultSize -= 
> KeyValueUtil.ensureKeyValue(kv).heapSize();
>   }
> {code}
> A utility method should be provided which computes the summation of Cell 
> sizes in a Result.
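> A minimal sketch of such a helper (the name getTotalHeapSize and its 
> placement are hypothetical):
> {code}
> public static long getTotalHeapSize(Result result) {
>   long size = 0L;
>   Cell[] cells = result.rawCells();
>   if (cells != null) {
>     for (Cell cell : cells) {
>       // Same computation the call sites duplicate today.
>       size += KeyValueUtil.ensureKeyValue(cell).heapSize();
>     }
>   }
>   return size;
> }
> {code}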



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11234) FastDiffDeltaEncoder#getFirstKeyInBlock returns wrong result

2014-05-26 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14009198#comment-14009198
 ] 

chunhui shen commented on HBASE-11234:
--

+1 on patch

> FastDiffDeltaEncoder#getFirstKeyInBlock returns wrong result
> 
>
> Key: HBASE-11234
> URL: https://issues.apache.org/jira/browse/HBASE-11234
> Project: HBase
>  Issue Type: Bug
>Reporter: chunhui shen
>Assignee: chunhui shen
>Priority: Critical
> Fix For: 0.99.0, 0.96.3, 0.94.21, 0.98.4
>
> Attachments: 11234-94.patch, 11234-96.patch, 
> 11234-98-with-prefix-tree.txt, 11234-trunk.addendum, HBASE-11234.patch
>
>
> As Ted found, 
> With this change:
> {noformat}
> Index: 
> hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestReversibleScanners.java
> ===
> --- 
> hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestReversibleScanners.java
>(revision 1596579)
> +++ 
> hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestReversibleScanners.java
>(working copy)
> @@ -51,6 +51,7 @@
>  import org.apache.hadoop.hbase.filter.FilterList.Operator;
>  import org.apache.hadoop.hbase.filter.PageFilter;
>  import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
> +import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
>  import org.apache.hadoop.hbase.io.hfile.CacheConfig;
>  import org.apache.hadoop.hbase.io.hfile.HFileContext;
>  import org.apache.hadoop.hbase.io.hfile.HFileContextBuilder;
> @@ -90,6 +91,7 @@
>  CacheConfig cacheConf = new CacheConfig(TEST_UTIL.getConfiguration());
>  HFileContextBuilder hcBuilder = new HFileContextBuilder();
>  hcBuilder.withBlockSize(2 * 1024);
> +hcBuilder.withDataBlockEncoding(DataBlockEncoding.FAST_DIFF);
>  HFileContext hFileContext = hcBuilder.build();
>  StoreFile.Writer writer = new StoreFile.WriterBuilder(
>  TEST_UTIL.getConfiguration(), cacheConf, fs).withOutputDir(
> {noformat}
> I got:
> java.lang.AssertionError: 
> expected: 
> but was:
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> org.apache.hadoop.hbase.regionserver.TestReversibleScanners.seekTestOfReversibleKeyValueScanner(TestReversibleScanners.java:533)
>   at 
> org.apache.hadoop.hbase.regionserver.TestReversibleScanners.testReversibleStoreFileScanner(TestReversibleScanners.java:108)
> After debugging, it seems the method FastDiffDeltaEncoder#getFirstKeyInBlock 
> has become broken, which causes HFileScanner#seekBefore to return a wrong 
> result.
> The solution is simple; see the patch.
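> A sketch of the contract at stake (illustrative; the patch itself touches 
> the FAST_DIFF decoder):
> {code}
> // seekBefore(key) must position the scanner on the last cell whose key is
> // strictly smaller than the given key. With FAST_DIFF blocks, the broken
> // getFirstKeyInBlock returned a wrong first key, so this could land on the
> // wrong cell.
> HFileScanner scanner = reader.getScanner(false, true);  // no cache, pread
> boolean hasPrevious = scanner.seekBefore(keyBytes);
> if (hasPrevious) {
>   KeyValue previous = scanner.getKeyValue();
> }
> {code}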



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11234) FastDiffDeltaEncoder#getFirstKeyInBlock returns wrong result

2014-05-22 Thread chunhui shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chunhui shen updated HBASE-11234:
-

Attachment: 11234-96.patch
11234-94.patch

> FastDiffDeltaEncoder#getFirstKeyInBlock returns wrong result
> 
>
> Key: HBASE-11234
> URL: https://issues.apache.org/jira/browse/HBASE-11234
> Project: HBase
>  Issue Type: Bug
>Reporter: chunhui shen
>Assignee: chunhui shen
>Priority: Critical
> Fix For: 0.99.0, 0.96.3, 0.94.20, 0.98.3
>
> Attachments: 11234-94.patch, 11234-96.patch, HBASE-11234.patch
>
>
> As Ted found, 
> With this change:
> {noformat}
> Index: 
> hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestReversibleScanners.java
> ===
> --- 
> hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestReversibleScanners.java
>(revision 1596579)
> +++ 
> hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestReversibleScanners.java
>(working copy)
> @@ -51,6 +51,7 @@
>  import org.apache.hadoop.hbase.filter.FilterList.Operator;
>  import org.apache.hadoop.hbase.filter.PageFilter;
>  import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
> +import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
>  import org.apache.hadoop.hbase.io.hfile.CacheConfig;
>  import org.apache.hadoop.hbase.io.hfile.HFileContext;
>  import org.apache.hadoop.hbase.io.hfile.HFileContextBuilder;
> @@ -90,6 +91,7 @@
>  CacheConfig cacheConf = new CacheConfig(TEST_UTIL.getConfiguration());
>  HFileContextBuilder hcBuilder = new HFileContextBuilder();
>  hcBuilder.withBlockSize(2 * 1024);
> +hcBuilder.withDataBlockEncoding(DataBlockEncoding.FAST_DIFF);
>  HFileContext hFileContext = hcBuilder.build();
>  StoreFile.Writer writer = new StoreFile.WriterBuilder(
>  TEST_UTIL.getConfiguration(), cacheConf, fs).withOutputDir(
> {noformat}
> I got:
> java.lang.AssertionError: 
> expected: 
> but was:
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> org.apache.hadoop.hbase.regionserver.TestReversibleScanners.seekTestOfReversibleKeyValueScanner(TestReversibleScanners.java:533)
>   at 
> org.apache.hadoop.hbase.regionserver.TestReversibleScanners.testReversibleStoreFileScanner(TestReversibleScanners.java:108)
> After debugging, it seems the method FastDiffDeltaEncoder#getFirstKeyInBlock 
> has become broken, which causes HFileScanner#seekBefore to return a wrong 
> result.
> The solution is simple; see the patch.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11234) FastDiffDeltaEncoder#getFirstKeyInBlock returns wrong result

2014-05-22 Thread chunhui shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chunhui shen updated HBASE-11234:
-

Fix Version/s: 0.94.21
   0.96.3

> FastDiffDeltaEncoder#getFirstKeyInBlock returns wrong result
> 
>
> Key: HBASE-11234
> URL: https://issues.apache.org/jira/browse/HBASE-11234
> Project: HBase
>  Issue Type: Bug
>Reporter: chunhui shen
>Assignee: chunhui shen
>Priority: Critical
> Fix For: 0.99.0, 0.96.3, 0.98.3, 0.94.21
>
> Attachments: HBASE-11234.patch
>
>
> As Ted found, 
> With this change:
> {noformat}
> Index: 
> hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestReversibleScanners.java
> ===
> --- 
> hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestReversibleScanners.java
>(revision 1596579)
> +++ 
> hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestReversibleScanners.java
>(working copy)
> @@ -51,6 +51,7 @@
>  import org.apache.hadoop.hbase.filter.FilterList.Operator;
>  import org.apache.hadoop.hbase.filter.PageFilter;
>  import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
> +import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
>  import org.apache.hadoop.hbase.io.hfile.CacheConfig;
>  import org.apache.hadoop.hbase.io.hfile.HFileContext;
>  import org.apache.hadoop.hbase.io.hfile.HFileContextBuilder;
> @@ -90,6 +91,7 @@
>  CacheConfig cacheConf = new CacheConfig(TEST_UTIL.getConfiguration());
>  HFileContextBuilder hcBuilder = new HFileContextBuilder();
>  hcBuilder.withBlockSize(2 * 1024);
> +hcBuilder.withDataBlockEncoding(DataBlockEncoding.FAST_DIFF);
>  HFileContext hFileContext = hcBuilder.build();
>  StoreFile.Writer writer = new StoreFile.WriterBuilder(
>  TEST_UTIL.getConfiguration(), cacheConf, fs).withOutputDir(
> {noformat}
> I got:
> java.lang.AssertionError: 
> expected: 
> but was:
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> org.apache.hadoop.hbase.regionserver.TestReversibleScanners.seekTestOfReversibleKeyValueScanner(TestReversibleScanners.java:533)
>   at 
> org.apache.hadoop.hbase.regionserver.TestReversibleScanners.testReversibleStoreFileScanner(TestReversibleScanners.java:108)
> After debugging, it seems the method FastDiffDeltaEncoder#getFirstKeyInBlock 
> has become broken, which causes HFileScanner#seekBefore to return a wrong 
> result.
> The solution is simple; see the patch.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11234) FastDiffDeltaEncoder#getFirstKeyInBlock returns wrong result

2014-05-22 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14006926#comment-14006926
 ] 

chunhui shen commented on HBASE-11234:
--

Committed to 0.96 and 0.94 also

> FastDiffDeltaEncoder#getFirstKeyInBlock returns wrong result
> 
>
> Key: HBASE-11234
> URL: https://issues.apache.org/jira/browse/HBASE-11234
> Project: HBase
>  Issue Type: Bug
>Reporter: chunhui shen
>Assignee: chunhui shen
>Priority: Critical
> Fix For: 0.99.0, 0.96.3, 0.98.3, 0.94.21
>
> Attachments: HBASE-11234.patch
>
>
> As Ted found, 
> With this change:
> {noformat}
> Index: 
> hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestReversibleScanners.java
> ===
> --- 
> hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestReversibleScanners.java
>(revision 1596579)
> +++ 
> hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestReversibleScanners.java
>(working copy)
> @@ -51,6 +51,7 @@
>  import org.apache.hadoop.hbase.filter.FilterList.Operator;
>  import org.apache.hadoop.hbase.filter.PageFilter;
>  import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
> +import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
>  import org.apache.hadoop.hbase.io.hfile.CacheConfig;
>  import org.apache.hadoop.hbase.io.hfile.HFileContext;
>  import org.apache.hadoop.hbase.io.hfile.HFileContextBuilder;
> @@ -90,6 +91,7 @@
>  CacheConfig cacheConf = new CacheConfig(TEST_UTIL.getConfiguration());
>  HFileContextBuilder hcBuilder = new HFileContextBuilder();
>  hcBuilder.withBlockSize(2 * 1024);
> +hcBuilder.withDataBlockEncoding(DataBlockEncoding.FAST_DIFF);
>  HFileContext hFileContext = hcBuilder.build();
>  StoreFile.Writer writer = new StoreFile.WriterBuilder(
>  TEST_UTIL.getConfiguration(), cacheConf, fs).withOutputDir(
> {noformat}
> I got:
> java.lang.AssertionError: 
> expected: 
> but was:
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> org.apache.hadoop.hbase.regionserver.TestReversibleScanners.seekTestOfReversibleKeyValueScanner(TestReversibleScanners.java:533)
>   at 
> org.apache.hadoop.hbase.regionserver.TestReversibleScanners.testReversibleStoreFileScanner(TestReversibleScanners.java:108)
> After debugging, it seems the method FastDiffDeltaEncoder#getFirstKeyInBlock 
> has become broken, which causes HFileScanner#seekBefore to return a wrong 
> result.
> The solution is simple; see the patch.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11234) FastDiffDeltaEncoder#getFirstKeyInBlock returns wrong result

2014-05-22 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14006840#comment-14006840
 ] 

chunhui shen commented on HBASE-11234:
--

Committed to trunk and 0.98

> FastDiffDeltaEncoder#getFirstKeyInBlock returns wrong result
> 
>
> Key: HBASE-11234
> URL: https://issues.apache.org/jira/browse/HBASE-11234
> Project: HBase
>  Issue Type: Bug
>Reporter: chunhui shen
>Assignee: chunhui shen
>Priority: Critical
> Fix For: 0.99.0, 0.98.3
>
> Attachments: HBASE-11234.patch
>
>
> As Ted found, 
> With this change:
> {noformat}
> Index: 
> hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestReversibleScanners.java
> ===
> --- 
> hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestReversibleScanners.java
>(revision 1596579)
> +++ 
> hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestReversibleScanners.java
>(working copy)
> @@ -51,6 +51,7 @@
>  import org.apache.hadoop.hbase.filter.FilterList.Operator;
>  import org.apache.hadoop.hbase.filter.PageFilter;
>  import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
> +import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
>  import org.apache.hadoop.hbase.io.hfile.CacheConfig;
>  import org.apache.hadoop.hbase.io.hfile.HFileContext;
>  import org.apache.hadoop.hbase.io.hfile.HFileContextBuilder;
> @@ -90,6 +91,7 @@
>  CacheConfig cacheConf = new CacheConfig(TEST_UTIL.getConfiguration());
>  HFileContextBuilder hcBuilder = new HFileContextBuilder();
>  hcBuilder.withBlockSize(2 * 1024);
> +hcBuilder.withDataBlockEncoding(DataBlockEncoding.FAST_DIFF);
>  HFileContext hFileContext = hcBuilder.build();
>  StoreFile.Writer writer = new StoreFile.WriterBuilder(
>  TEST_UTIL.getConfiguration(), cacheConf, fs).withOutputDir(
> {noformat}
> I got:
> java.lang.AssertionError: 
> expected: 
> but was:
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> org.apache.hadoop.hbase.regionserver.TestReversibleScanners.seekTestOfReversibleKeyValueScanner(TestReversibleScanners.java:533)
>   at 
> org.apache.hadoop.hbase.regionserver.TestReversibleScanners.testReversibleStoreFileScanner(TestReversibleScanners.java:108)
> After debugging, it seems the method FastDiffDeltaEncoder#getFirstKeyInBlock 
> has become broken, which causes HFileScanner#seekBefore to return a wrong 
> result.
> The solution is simple; see the patch.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11234) FastDiffDeltaEncoder#getFirstKeyInBlock returns wrong result

2014-05-22 Thread chunhui shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chunhui shen updated HBASE-11234:
-

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

> FastDiffDeltaEncoder#getFirstKeyInBlock returns wrong result
> 
>
> Key: HBASE-11234
> URL: https://issues.apache.org/jira/browse/HBASE-11234
> Project: HBase
>  Issue Type: Bug
>Reporter: chunhui shen
>Assignee: chunhui shen
>Priority: Critical
> Fix For: 0.99.0, 0.98.3
>
> Attachments: HBASE-11234.patch
>
>
> As Ted found, 
> With this change:
> {noformat}
> Index: 
> hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestReversibleScanners.java
> ===
> --- 
> hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestReversibleScanners.java
>(revision 1596579)
> +++ 
> hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestReversibleScanners.java
>(working copy)
> @@ -51,6 +51,7 @@
>  import org.apache.hadoop.hbase.filter.FilterList.Operator;
>  import org.apache.hadoop.hbase.filter.PageFilter;
>  import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
> +import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
>  import org.apache.hadoop.hbase.io.hfile.CacheConfig;
>  import org.apache.hadoop.hbase.io.hfile.HFileContext;
>  import org.apache.hadoop.hbase.io.hfile.HFileContextBuilder;
> @@ -90,6 +91,7 @@
>  CacheConfig cacheConf = new CacheConfig(TEST_UTIL.getConfiguration());
>  HFileContextBuilder hcBuilder = new HFileContextBuilder();
>  hcBuilder.withBlockSize(2 * 1024);
> +hcBuilder.withDataBlockEncoding(DataBlockEncoding.FAST_DIFF);
>  HFileContext hFileContext = hcBuilder.build();
>  StoreFile.Writer writer = new StoreFile.WriterBuilder(
>  TEST_UTIL.getConfiguration(), cacheConf, fs).withOutputDir(
> {noformat}
> I got:
> java.lang.AssertionError: 
> expected: 
> but was:
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> org.apache.hadoop.hbase.regionserver.TestReversibleScanners.seekTestOfReversibleKeyValueScanner(TestReversibleScanners.java:533)
>   at 
> org.apache.hadoop.hbase.regionserver.TestReversibleScanners.testReversibleStoreFileScanner(TestReversibleScanners.java:108)
> After debugging, it seems the method FastDiffDeltaEncoder#getFirstKeyInBlock 
> has become broken, which causes HFileScanner#seekBefore to return a wrong 
> result.
> The solution is simple; see the patch.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11234) FastDiffDeltaEncoder#getFirstKeyInBlock returns wrong result

2014-05-22 Thread chunhui shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chunhui shen updated HBASE-11234:
-

Status: Patch Available  (was: Open)

> FastDiffDeltaEncoder#getFirstKeyInBlock returns wrong result
> 
>
> Key: HBASE-11234
> URL: https://issues.apache.org/jira/browse/HBASE-11234
> Project: HBase
>  Issue Type: Bug
>Reporter: chunhui shen
>Assignee: chunhui shen
>Priority: Critical
> Fix For: 0.99.0
>
> Attachments: HBASE-11234.patch
>
>
> As Ted found, 
> With this change:
> {noformat}
> Index: 
> hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestReversibleScanners.java
> ===
> --- 
> hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestReversibleScanners.java
>(revision 1596579)
> +++ 
> hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestReversibleScanners.java
>(working copy)
> @@ -51,6 +51,7 @@
>  import org.apache.hadoop.hbase.filter.FilterList.Operator;
>  import org.apache.hadoop.hbase.filter.PageFilter;
>  import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
> +import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
>  import org.apache.hadoop.hbase.io.hfile.CacheConfig;
>  import org.apache.hadoop.hbase.io.hfile.HFileContext;
>  import org.apache.hadoop.hbase.io.hfile.HFileContextBuilder;
> @@ -90,6 +91,7 @@
>  CacheConfig cacheConf = new CacheConfig(TEST_UTIL.getConfiguration());
>  HFileContextBuilder hcBuilder = new HFileContextBuilder();
>  hcBuilder.withBlockSize(2 * 1024);
> +hcBuilder.withDataBlockEncoding(DataBlockEncoding.FAST_DIFF);
>  HFileContext hFileContext = hcBuilder.build();
>  StoreFile.Writer writer = new StoreFile.WriterBuilder(
>  TEST_UTIL.getConfiguration(), cacheConf, fs).withOutputDir(
> {noformat}
> I got:
> java.lang.AssertionError: 
> expected: 
> but was:
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> org.apache.hadoop.hbase.regionserver.TestReversibleScanners.seekTestOfReversibleKeyValueScanner(TestReversibleScanners.java:533)
>   at 
> org.apache.hadoop.hbase.regionserver.TestReversibleScanners.testReversibleStoreFileScanner(TestReversibleScanners.java:108)
> After debugging, it seems the method FastDiffDeltaEncoder#getFirstKeyInBlock 
> has become broken, which causes HFileScanner#seekBefore to return a wrong 
> result.
> The solution is simple; see the patch.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11234) FastDiffDeltaEncoder#getFirstKeyInBlock returns wrong result

2014-05-22 Thread chunhui shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chunhui shen updated HBASE-11234:
-

Description: 
As Ted found, 
{noformat}
With this change:

Index: 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestReversibleScanners.java
===
--- 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestReversibleScanners.java
 (revision 1596579)
+++ 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestReversibleScanners.java
 (working copy)
@@ -51,6 +51,7 @@
 import org.apache.hadoop.hbase.filter.FilterList.Operator;
 import org.apache.hadoop.hbase.filter.PageFilter;
 import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
+import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
 import org.apache.hadoop.hbase.io.hfile.CacheConfig;
 import org.apache.hadoop.hbase.io.hfile.HFileContext;
 import org.apache.hadoop.hbase.io.hfile.HFileContextBuilder;
@@ -90,6 +91,7 @@
 CacheConfig cacheConf = new CacheConfig(TEST_UTIL.getConfiguration());
 HFileContextBuilder hcBuilder = new HFileContextBuilder();
 hcBuilder.withBlockSize(2 * 1024);
+hcBuilder.withDataBlockEncoding(DataBlockEncoding.FAST_DIFF);
 HFileContext hFileContext = hcBuilder.build();
 StoreFile.Writer writer = new StoreFile.WriterBuilder(
 TEST_UTIL.getConfiguration(), cacheConf, fs).withOutputDir(

I got:

java.lang.AssertionError: 
expected: but 
was:
  at org.junit.Assert.fail(Assert.java:88)
  at org.junit.Assert.failNotEquals(Assert.java:743)
  at org.junit.Assert.assertEquals(Assert.java:118)
  at org.junit.Assert.assertEquals(Assert.java:144)
  at 
org.apache.hadoop.hbase.regionserver.TestReversibleScanners.seekTestOfReversibleKeyValueScanner(TestReversibleScanners.java:533)
  at 
org.apache.hadoop.hbase.regionserver.TestReversibleScanners.testReversibleStoreFileScanner(TestReversibleScanners.java:108)
{noformat}


After debugging, it seems the method FastDiffDeltaEncoder#getFirstKeyInBlock 
has become broken, which causes HFileScanner#seekBefore to return a wrong 
result.


The solution is simple; see the patch.

  was:
As Ted found, 
{format}
With this change:

Index: 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestReversibleScanners.java
===
--- 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestReversibleScanners.java
 (revision 1596579)
+++ 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestReversibleScanners.java
 (working copy)
@@ -51,6 +51,7 @@
 import org.apache.hadoop.hbase.filter.FilterList.Operator;
 import org.apache.hadoop.hbase.filter.PageFilter;
 import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
+import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
 import org.apache.hadoop.hbase.io.hfile.CacheConfig;
 import org.apache.hadoop.hbase.io.hfile.HFileContext;
 import org.apache.hadoop.hbase.io.hfile.HFileContextBuilder;
@@ -90,6 +91,7 @@
 CacheConfig cacheConf = new CacheConfig(TEST_UTIL.getConfiguration());
 HFileContextBuilder hcBuilder = new HFileContextBuilder();
 hcBuilder.withBlockSize(2 * 1024);
+hcBuilder.withDataBlockEncoding(DataBlockEncoding.FAST_DIFF);
 HFileContext hFileContext = hcBuilder.build();
 StoreFile.Writer writer = new StoreFile.WriterBuilder(
 TEST_UTIL.getConfiguration(), cacheConf, fs).withOutputDir(

I got:

java.lang.AssertionError: 
expected: but 
was:
  at org.junit.Assert.fail(Assert.java:88)
  at org.junit.Assert.failNotEquals(Assert.java:743)
  at org.junit.Assert.assertEquals(Assert.java:118)
  at org.junit.Assert.assertEquals(Assert.java:144)
  at 
org.apache.hadoop.hbase.regionserver.TestReversibleScanners.seekTestOfReversibleKeyValueScanner(TestReversibleScanners.java:533)
  at 
org.apache.hadoop.hbase.regionserver.TestReversibleScanners.testReversibleStoreFileScanner(TestReversibleScanners.java:108)
{format}


After debugging, it seems the method of FastDiffDeltaEncoder#getFirstKeyInBlock 
become broken. And it will cause hfilescanner#seekBefore returns wrong result.


The solution is simple, see the patch.


> FastDiffDeltaEncoder#getFirstKeyInBlock returns wrong result
> 
>
> Key: HBASE-11234
> URL: https://issues.apache.org/jira/browse/HBASE-11234
> Project: HBase
>  Issue Type: Bug
>Reporter: chunhui shen
>Assignee: chunhui shen
>Priority: Critical
> Fix For: 0.99.0
>
> Attachments: HBASE-11234.patch
>
>
> As Ted found, 
> {noformat}
> With this change:
> Index: 
> hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestReversibleScanners.java
> ==

[jira] [Updated] (HBASE-11234) FastDiffDeltaEncoder#getFirstKeyInBlock returns wrong result

2014-05-22 Thread chunhui shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chunhui shen updated HBASE-11234:
-

Description: 
As Ted found, 

With this change:
{noformat}
Index: 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestReversibleScanners.java
===
--- 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestReversibleScanners.java
 (revision 1596579)
+++ 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestReversibleScanners.java
 (working copy)
@@ -51,6 +51,7 @@
 import org.apache.hadoop.hbase.filter.FilterList.Operator;
 import org.apache.hadoop.hbase.filter.PageFilter;
 import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
+import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
 import org.apache.hadoop.hbase.io.hfile.CacheConfig;
 import org.apache.hadoop.hbase.io.hfile.HFileContext;
 import org.apache.hadoop.hbase.io.hfile.HFileContextBuilder;
@@ -90,6 +91,7 @@
 CacheConfig cacheConf = new CacheConfig(TEST_UTIL.getConfiguration());
 HFileContextBuilder hcBuilder = new HFileContextBuilder();
 hcBuilder.withBlockSize(2 * 1024);
+hcBuilder.withDataBlockEncoding(DataBlockEncoding.FAST_DIFF);
 HFileContext hFileContext = hcBuilder.build();
 StoreFile.Writer writer = new StoreFile.WriterBuilder(
 TEST_UTIL.getConfiguration(), cacheConf, fs).withOutputDir(
{noformat}
I got:

java.lang.AssertionError: 
expected: but 
was:
  at org.junit.Assert.fail(Assert.java:88)
  at org.junit.Assert.failNotEquals(Assert.java:743)
  at org.junit.Assert.assertEquals(Assert.java:118)
  at org.junit.Assert.assertEquals(Assert.java:144)
  at 
org.apache.hadoop.hbase.regionserver.TestReversibleScanners.seekTestOfReversibleKeyValueScanner(TestReversibleScanners.java:533)
  at 
org.apache.hadoop.hbase.regionserver.TestReversibleScanners.testReversibleStoreFileScanner(TestReversibleScanners.java:108)



After debugging, it seems the method FastDiffDeltaEncoder#getFirstKeyInBlock 
has become broken, which causes HFileScanner#seekBefore to return a wrong 
result.


The solution is simple; see the patch.

  was:
As Ted found, 
{noformat}
With this change:

Index: 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestReversibleScanners.java
===
--- 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestReversibleScanners.java
 (revision 1596579)
+++ 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestReversibleScanners.java
 (working copy)
@@ -51,6 +51,7 @@
 import org.apache.hadoop.hbase.filter.FilterList.Operator;
 import org.apache.hadoop.hbase.filter.PageFilter;
 import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
+import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
 import org.apache.hadoop.hbase.io.hfile.CacheConfig;
 import org.apache.hadoop.hbase.io.hfile.HFileContext;
 import org.apache.hadoop.hbase.io.hfile.HFileContextBuilder;
@@ -90,6 +91,7 @@
 CacheConfig cacheConf = new CacheConfig(TEST_UTIL.getConfiguration());
 HFileContextBuilder hcBuilder = new HFileContextBuilder();
 hcBuilder.withBlockSize(2 * 1024);
+hcBuilder.withDataBlockEncoding(DataBlockEncoding.FAST_DIFF);
 HFileContext hFileContext = hcBuilder.build();
 StoreFile.Writer writer = new StoreFile.WriterBuilder(
 TEST_UTIL.getConfiguration(), cacheConf, fs).withOutputDir(

I got:

java.lang.AssertionError: 
expected: but 
was:
  at org.junit.Assert.fail(Assert.java:88)
  at org.junit.Assert.failNotEquals(Assert.java:743)
  at org.junit.Assert.assertEquals(Assert.java:118)
  at org.junit.Assert.assertEquals(Assert.java:144)
  at 
org.apache.hadoop.hbase.regionserver.TestReversibleScanners.seekTestOfReversibleKeyValueScanner(TestReversibleScanners.java:533)
  at 
org.apache.hadoop.hbase.regionserver.TestReversibleScanners.testReversibleStoreFileScanner(TestReversibleScanners.java:108)
{noformat}


After debugging, it seems the method of FastDiffDeltaEncoder#getFirstKeyInBlock 
become broken. And it will cause hfilescanner#seekBefore returns wrong result.


The solution is simple, see the patch.


> FastDiffDeltaEncoder#getFirstKeyInBlock returns wrong result
> 
>
> Key: HBASE-11234
> URL: https://issues.apache.org/jira/browse/HBASE-11234
> Project: HBase
>  Issue Type: Bug
>Reporter: chunhui shen
>Assignee: chunhui shen
>Priority: Critical
> Fix For: 0.99.0
>
> Attachments: HBASE-11234.patch
>
>
> As Ted found, 
> With this change:
> {noformat}
> Index: 
> hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestReversibleScanners.java
> ==

[jira] [Updated] (HBASE-11234) FastDiffDeltaEncoder#getFirstKeyInBlock returns wrong result

2014-05-22 Thread chunhui shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chunhui shen updated HBASE-11234:
-

Attachment: HBASE-11234.patch

> FastDiffDeltaEncoder#getFirstKeyInBlock returns wrong result
> 
>
> Key: HBASE-11234
> URL: https://issues.apache.org/jira/browse/HBASE-11234
> Project: HBase
>  Issue Type: Bug
>Reporter: chunhui shen
>Assignee: chunhui shen
>Priority: Critical
> Fix For: 0.99.0
>
> Attachments: HBASE-11234.patch
>
>
> As Ted found, 
> {format}
> With this change:
> Index: 
> hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestReversibleScanners.java
> ===
> --- 
> hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestReversibleScanners.java
>(revision 1596579)
> +++ 
> hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestReversibleScanners.java
>(working copy)
> @@ -51,6 +51,7 @@
>  import org.apache.hadoop.hbase.filter.FilterList.Operator;
>  import org.apache.hadoop.hbase.filter.PageFilter;
>  import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
> +import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
>  import org.apache.hadoop.hbase.io.hfile.CacheConfig;
>  import org.apache.hadoop.hbase.io.hfile.HFileContext;
>  import org.apache.hadoop.hbase.io.hfile.HFileContextBuilder;
> @@ -90,6 +91,7 @@
>  CacheConfig cacheConf = new CacheConfig(TEST_UTIL.getConfiguration());
>  HFileContextBuilder hcBuilder = new HFileContextBuilder();
>  hcBuilder.withBlockSize(2 * 1024);
> +hcBuilder.withDataBlockEncoding(DataBlockEncoding.FAST_DIFF);
>  HFileContext hFileContext = hcBuilder.build();
>  StoreFile.Writer writer = new StoreFile.WriterBuilder(
>  TEST_UTIL.getConfiguration(), cacheConf, fs).withOutputDir(
> I got:
> java.lang.AssertionError: 
> expected: 
> but was:
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> org.apache.hadoop.hbase.regionserver.TestReversibleScanners.seekTestOfReversibleKeyValueScanner(TestReversibleScanners.java:533)
>   at 
> org.apache.hadoop.hbase.regionserver.TestReversibleScanners.testReversibleStoreFileScanner(TestReversibleScanners.java:108)
> {format}
> After debugging, it seems the method FastDiffDeltaEncoder#getFirstKeyInBlock 
> has become broken, which causes HFileScanner#seekBefore to return a wrong 
> result.
> The solution is simple; see the patch.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-11234) FastDiffDeltaEncoder#getFirstKeyInBlock returns wrong result

2014-05-22 Thread chunhui shen (JIRA)
chunhui shen created HBASE-11234:


 Summary: FastDiffDeltaEncoder#getFirstKeyInBlock returns wrong 
result
 Key: HBASE-11234
 URL: https://issues.apache.org/jira/browse/HBASE-11234
 Project: HBase
  Issue Type: Bug
Reporter: chunhui shen
Assignee: chunhui shen
Priority: Critical
 Fix For: 0.99.0


As Ted found, 
{format}
With this change:

Index: 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestReversibleScanners.java
===
--- 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestReversibleScanners.java
 (revision 1596579)
+++ 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestReversibleScanners.java
 (working copy)
@@ -51,6 +51,7 @@
 import org.apache.hadoop.hbase.filter.FilterList.Operator;
 import org.apache.hadoop.hbase.filter.PageFilter;
 import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
+import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
 import org.apache.hadoop.hbase.io.hfile.CacheConfig;
 import org.apache.hadoop.hbase.io.hfile.HFileContext;
 import org.apache.hadoop.hbase.io.hfile.HFileContextBuilder;
@@ -90,6 +91,7 @@
 CacheConfig cacheConf = new CacheConfig(TEST_UTIL.getConfiguration());
 HFileContextBuilder hcBuilder = new HFileContextBuilder();
 hcBuilder.withBlockSize(2 * 1024);
+hcBuilder.withDataBlockEncoding(DataBlockEncoding.FAST_DIFF);
 HFileContext hFileContext = hcBuilder.build();
 StoreFile.Writer writer = new StoreFile.WriterBuilder(
 TEST_UTIL.getConfiguration(), cacheConf, fs).withOutputDir(

I got:

java.lang.AssertionError: 
expected: but 
was:
  at org.junit.Assert.fail(Assert.java:88)
  at org.junit.Assert.failNotEquals(Assert.java:743)
  at org.junit.Assert.assertEquals(Assert.java:118)
  at org.junit.Assert.assertEquals(Assert.java:144)
  at 
org.apache.hadoop.hbase.regionserver.TestReversibleScanners.seekTestOfReversibleKeyValueScanner(TestReversibleScanners.java:533)
  at 
org.apache.hadoop.hbase.regionserver.TestReversibleScanners.testReversibleStoreFileScanner(TestReversibleScanners.java:108)
{format}


After debugging, it seems the method FastDiffDeltaEncoder#getFirstKeyInBlock 
has become broken, which causes HFileScanner#seekBefore to return a wrong 
result.


The solution is simple; see the patch.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11093) FilterList#filterRow() iterates through its filters even though FilterList#hasFilterRow() returns false

2014-04-29 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13984120#comment-13984120
 ] 

chunhui shen commented on HBASE-11093:
--

Patch v2 lgtm, +1

> FilterList#filterRow() iterates through its filters even though 
> FilterList#hasFilterRow() returns false
> ---
>
> Key: HBASE-11093
> URL: https://issues.apache.org/jira/browse/HBASE-11093
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Attachments: 11093-v1.txt, 11093-v2.txt
>
>
> FilterList#hasFilterRow() returns false when hasFilterRow() returns false 
> for all of its constituent filters.
> However, FilterList#filterRow() still iterates through its filters in this 
> scenario.
> The iteration should be skipped when FilterList#hasFilterRow() returns false.
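> A sketch of the suggested short-circuit (illustrative; the real 
> FilterList#filterRow() also honors its MUST_PASS_ALL/MUST_PASS_ONE operator):
> {code}
> @Override
> public boolean filterRow() {
>   if (!hasFilterRow()) {
>     // No constituent filter filters by row, so skip the iteration entirely.
>     return false;
>   }
>   for (Filter filter : filters) {
>     if (filter.filterRow()) {
>       return true;
>     }
>   }
>   return false;
> }
> {code}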



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Comment Edited] (HBASE-11093) FilterList#filterRow() iterates through its filters even though FilterList#hasFilterRow() returns false

2014-04-28 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13984014#comment-13984014
 ] 

chunhui shen edited comment on HBASE-11093 at 4/29/14 5:10 AM:
---

Is it an improvement, rather than a bug?

Agree with the comments ram mentioned:
"false == hasFilterRow" seems like it could be replaced with 
"!hasFilterRow.booleanValue()",
and it should be reset in the reset() method.



was (Author: zjushch):
Is it an improvement, rather than a bug?

As ram mentioned:
"false == hasFilterRow " seems could be replaced with 
"!hasFilterRow.booleanValue()"
and reset it in reset() method


> FilterList#filterRow() iterates through its filters even though 
> FilterList#hasFilterRow() returns false
> ---
>
> Key: HBASE-11093
> URL: https://issues.apache.org/jira/browse/HBASE-11093
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Attachments: 11093-v1.txt
>
>
> FilterList#hasFilterRow() returns false when hasFilterRow() returns false 
> for all of its constituent filters.
> However, FilterList#filterRow() still iterates through its filters in this 
> scenario.
> The iteration should be skipped when FilterList#hasFilterRow() returns false.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11093) FilterList#filterRow() iterates through its filters even though FilterList#hasFilterRow() returns false

2014-04-28 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13984014#comment-13984014
 ] 

chunhui shen commented on HBASE-11093:
--

Is it an improvement, rather than a bug?

As ram mentioned:
"false == hasFilterRow " seems could be replaced with 
"!hasFilterRow.booleanValue()"
and reset it in reset() method


> FilterList#filterRow() iterates through its filters even though 
> FilterList#hasFilterRow() returns false
> ---
>
> Key: HBASE-11093
> URL: https://issues.apache.org/jira/browse/HBASE-11093
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Attachments: 11093-v1.txt
>
>
> FilterList#hasFilterRow() returns false when hasFilterRow() returns false 
> for all of its constituent filters.
> However, FilterList#filterRow() still iterates through its filters in this 
> scenario.
> The iteration should be skipped when FilterList#hasFilterRow() returns false.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11018) ZKUtil.getChildDataAndWatchForNewChildren() will not return null as indicated

2014-04-21 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13976285#comment-13976285
 ] 

chunhui shen commented on HBASE-11018:
--

+1

> ZKUtil.getChildDataAndWatchForNewChildren() will not return null as indicated
> -
>
> Key: HBASE-11018
> URL: https://issues.apache.org/jira/browse/HBASE-11018
> Project: HBase
>  Issue Type: Bug
>  Components: Zookeeper
>Affects Versions: 0.96.1, 0.98.1
>Reporter: Jerry He
>Assignee: Jerry He
>Priority: Minor
> Fix For: 0.99.0
>
> Attachments: HBASE-11018-trunk.patch
>
>
> While working on HBase acl, I found out that 
> ZKUtil.getChildDataAndWatchForNewChildren() will not return null as 
> indicated.  Here is the code:
> {code}
>   /**
>    * Returns null if the specified node does not exist.  Otherwise returns a
>    * list of children of the specified node.  If the node exists but it has no
>    * children, an empty list will be returned.
>    */
>   public static List<NodeAndData> getChildDataAndWatchForNewChildren(
>       ZooKeeperWatcher zkw, String baseNode) throws KeeperException {
>     List<String> nodes =
>       ZKUtil.listChildrenAndWatchForNewChildren(zkw, baseNode);
>     List<NodeAndData> newNodes = new ArrayList<NodeAndData>();
>     if (nodes != null) {
>       for (String node : nodes) {
>         String nodePath = ZKUtil.joinZNode(baseNode, node);
>         byte[] data = ZKUtil.getDataAndWatch(zkw, nodePath);
>         newNodes.add(new NodeAndData(nodePath, data));
>       }
>     }
>     return newNodes;
>   }
> {code}
> We return 'newNodes' which will never be null.
> This is a deprecated method.  But it is still used in HBase code.
> For example: 
> org.apache.hadoop.hbase.security.access.ZKPermissionWatcher.start()
> {code}
>   public void start() throws KeeperException {
>     watcher.registerListener(this);
>     if (ZKUtil.watchAndCheckExists(watcher, aclZNode)) {
>       List<ZKUtil.NodeAndData> existing =
>           ZKUtil.getChildDataAndWatchForNewChildren(watcher, aclZNode);
>       if (existing != null) {
>         refreshNodes(existing);
>       }
>     }
>   }
> {code}
> We test for a 'null' return from the call, which becomes the problem.
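> A sketch of a caller that matches the actual behavior (illustrative):
> {code}
> List<ZKUtil.NodeAndData> existing =
>     ZKUtil.getChildDataAndWatchForNewChildren(watcher, aclZNode);
> if (existing != null && !existing.isEmpty()) {
>   // isEmpty() is the reliable signal; the method never returns null today.
>   refreshNodes(existing);
> }
> {code}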



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11036) Online schema change with region merge may cause data loss

2014-04-19 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13974780#comment-13974780
 ] 

chunhui shen commented on HBASE-11036:
--

Once the region merge completed, the old regions were removed from the meta 
table. (Completed at 2014-04-16 18:26:37,352)
Why would the schema-change thread still get the already-merged regions from 
meta? (Happened at 2014-04-16 18:32:42,113)
I suspect either data loss on the META table, or that the table lock was not 
effective at that time.

> Online schema change with region merge may cause data loss 
> ---
>
> Key: HBASE-11036
> URL: https://issues.apache.org/jira/browse/HBASE-11036
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
> Fix For: 0.99.0, 0.98.2
>
>
> We have found out that online schema change and region merges may still cause 
> issues with merged regions coming back online, thus causing data loss. 
> Recently ITBLL failed, reporting 800K missing rows out of 720M. We've been 
> running this test for an extended period of time, and this is the first time 
> we are seeing it, meaning that it is rarer. But it is still concerning. 
> From master's log:
> The merge has happened:
> {code}
> 2014-04-16 18:26:37,247 INFO  [AM.ZK.Worker-pool2-t73] 
> master.AssignmentManager: Handled MERGED event; 
> merged=IntegrationTestBigLinkedList,\xB2\xFE\x03s,1397672795119.80159738a0167e20a2e29fb2c46702f2.,
>  
>   
> region_a=IntegrationTestBigLinkedList,\xB2\xFE\x03s,1397670978959.0ac116e4d7da87922d3a8f218ca21079.,
>  
>   
> region_b=IntegrationTestBigLinkedList,\xB8\x03\x94\x15,1397672587456.1265d06478082ced65dd9a0c1c2b63c2.,
>  on hor13n03.gq1.ygridcore.net,60020,1397672668647
> {code}
> The region server shows the merge is complete: 
> {code}
> 2014-04-16 18:26:37,352 INFO  [regionserver60020-merges-1397672794972] 
> regionserver.RegionMergeRequest: Regions merged, hbase:meta updated, and 
> report to master. 
> region_a=IntegrationTestBigLinkedList,\xB2\xFE\x03s,1397670978959.0ac116e4d7da87922d3a8f218ca21079.,
>  
> region_b=IntegrationTestBigLinkedList,\xB8\x03\x94\x15,1397672587456.1265d06478082ced65dd9a0c1c2b63c2..
>  Region merge took 2sec
> {code}
> The new region was online on the region server for some time: 
> {code}
> 2014-04-16 18:31:22,858 DEBUG [RS_OPEN_REGION-hor13n03:60020-1] 
> handler.OpenRegionHandler: Opened 
> IntegrationTestBigLinkedList,\xB2\xFE\x03s,1397672795119.80159738a0167e20a2e29fb2c46702f2.
>  on hor13n03.gq1.ygridcore.net,60020,1397672668647
> {code}
> Then the region server was killed around 2014-04-16 18:31:26,254. 
> The master started log splitting etc for the dead RS:
> {code}
> 2014-04-16 18:31:28,942 INFO  [MASTER_SERVER_OPERATIONS-hor13n02:6-3] 
> handler.ServerShutdownHandler: Splitting logs for 
> hor13n03.gq1.ygridcore.net,60020,1397672668647 before assignment.
> ..
> 2014-04-16 18:31:40,887 INFO  [MASTER_SERVER_OPERATIONS-hor13n02:6-3] 
> master.RegionStates: Transitioned {80159738a0167e20a2e29fb2c46702f2 
> state=OPEN, ts=1397673082874, 
> server=hor13n03.gq1.ygridcore.net,60020,1397672668647} to 
> {80159738a0167e20a2e29fb
> 2014-04-16 18:31:40,887 INFO  [MASTER_SERVER_OPERATIONS-hor13n02:6-3] 
> master.RegionStates: Offlined 80159738a0167e20a2e29fb2c46702f2 from 
> hor13n03.gq1.ygridcore.net,60020,1397672668647
> {code}
> But this region was not assigned again at all. Instead, the master restarted 
> shortly after, and reassigned the regions that were merged already: 
> {code}
> 2014-04-16 18:34:02,569 INFO  [master:hor13n02:6] 
> master.ActiveMasterManager: Registered Active 
> Master=hor13n02.gq1.ygridcore.net,6,1397673241215
> ...
> 2014-04-16 18:34:10,412 INFO  [master:hor13n02:6] 
> master.AssignmentManager: Found regions out on cluster or in RIT; presuming 
> failover
> 2014-04-16 18:34:10,412 WARN  [master:hor13n02:6] master.ServerManager: 
> Expiration of hor13n03.gq1.ygridcore.net,60020,1397671753021 but server not 
> online
> ..
> 2014-04-16 18:34:10,880 INFO  [MASTER_SERVER_OPERATIONS-hor13n02:6-3] 
> master.AssignmentManager: Bulk assigning 28 region(s) across 4 server(s), 
> round-robin=true
> ..
> 2014-04-16 18:34:10,882 DEBUG 
> [hor13n02.gq1.ygridcore.net,6,1397673241215-GeneralBulkAssigner-1] 
> zookeeper.ZKAssign: master:6-0x2456a4863640255, 
> quorum=hor13n04.gq1.ygridcore.net:2181,hor13n03.gq1.ygridcore.net:2181,hor13n20.gq1.ygridcore.net:2181,
>  baseZNode=/hbase Async create of unassigned node 0ac116e4d7da8792..
> ..
> 2014-04-16 18:34:13,077 INFO  [AM.ZK.Worker-pool2-t7] master.RegionStates: 
> Onlined 0ac116e4d7da87922d3a8f218ca21079 on 
> hor13n04.gq1.ygridcore.net,60020,1397672685370
> {code}
> Before the master went down, there were other logs that indicate something 
> funky: 
> {code}
> 2014-04-16 18:32:42,113 DEBUG 
>

[jira] [Commented] (HBASE-10995) Fix resource leak related to unclosed HBaseAdmin

2014-04-16 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13970595#comment-13970595
 ] 

chunhui shen commented on HBASE-10995:
--

MasterStatusServlet and the Master share the same connection.
If close is called twice, the connection will really be closed, but the master 
should still be able to use this connection.

> Fix resource leak related to unclosed HBaseAdmin
> 
>
> Key: HBASE-10995
> URL: https://issues.apache.org/jira/browse/HBASE-10995
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 10995-v1.txt
>
>
> This issue fixes 3 cases where HBaseAdmin is left not closed.
> Here are the files involved:
> hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterStatusServlet.java
> hbase-server/src/main/java/org/apache/hadoop/hbase/util/HMerge.java
> hbase-server/src/main/java/org/apache/hadoop/hbase/util/HBaseFsck.java



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10995) Fix resource leak related to unclosed HBaseAdmin

2014-04-15 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13970414#comment-13970414
 ] 

chunhui shen commented on HBASE-10995:
--

In MasterStatusTmpl.jamon, deleteConnection is called
{code}
<%java>
   HConnectionManager.deleteConnection(admin.getConfiguration());
</%java>
{code}

Will it close the connection twice if we call admin.close() in 
MasterStatusServlet?
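A sketch of the sequence in question (illustrative, not the actual servlet 
code):
{code}
HBaseAdmin admin = new HBaseAdmin(conf);
try {
  // ... render the master status page ...
} finally {
  admin.close();                               // first close, via the admin
  HConnectionManager.deleteConnection(conf);   // template closes again?
}
{code}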

> Fix resource leak related to unclosed HBaseAdmin
> 
>
> Key: HBASE-10995
> URL: https://issues.apache.org/jira/browse/HBASE-10995
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 10995-v1.txt
>
>
> This issue fixes 3 cases where HBaseAdmin is left not closed.
> Here are the files involved:
> hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterStatusServlet.java
> hbase-server/src/main/java/org/apache/hadoop/hbase/util/HMerge.java
> hbase-server/src/main/java/org/apache/hadoop/hbase/util/HBaseFsck.java



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10956) Upgrade hadoop-2 dependency to 2.4.0

2014-04-11 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13967417#comment-13967417
 ] 

chunhui shen commented on HBASE-10956:
--

+1

> Upgrade hadoop-2 dependency to 2.4.0
> 
>
> Key: HBASE-10956
> URL: https://issues.apache.org/jira/browse/HBASE-10956
> Project: HBase
>  Issue Type: Task
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 10956-v1.txt
>
>
> Hadoop 2.4.0 has been released:
> http://search-hadoop.com/m/LgpTk2YKhUf
> This JIRA is to upgrade hadoop-2 dependency to 2.4.0



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-6618) Implement FuzzyRowFilter with ranges support

2014-04-09 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13964901#comment-13964901
 ] 

chunhui shen commented on HBASE-6618:
-

bq.not sure I got the question, sorry: which checks?
{code}
+ * 
+ *   NOTE that currently no checks are performed to ensure that length of 
ranges lower bytes and
+ *   ranges upper bytes match mask length. Filter may work incorrectly or fail 
(with runtime
+ *   exceptions) if this is broken.
+ * 
+ *
+ * 
+ *   NOTE that currently no checks are performed to ensure that ranges are 
defined correctly (i.e.
+ *   lower value of each range is not greater than upper value). Filter may 
work incorrectly or fail
+ *   (with runtime exceptions) if this is broken.
+ * 
+ *
+ * 
+ *   NOTE that currently no checks are performed to ensure that at non-fixed 
positions in
+ *   ranges lower bytes and ranges upper bytes zeroes are set, but 
implementation may rely on this.
+ * 
{code}
I meant the checks above.


> Implement FuzzyRowFilter with ranges support
> 
>
> Key: HBASE-6618
> URL: https://issues.apache.org/jira/browse/HBASE-6618
> Project: HBase
>  Issue Type: New Feature
>  Components: Filters
>Reporter: Alex Baranau
>Assignee: Alex Baranau
>Priority: Minor
> Fix For: 0.99.0
>
> Attachments: HBASE-6618-algo-desc-bits.png, HBASE-6618-algo.patch, 
> HBASE-6618.patch, HBASE-6618_2.path, HBASE-6618_3.path, HBASE-6618_4.patch, 
> HBASE-6618_5.patch
>
>
> Apart from current ability to specify fuzzy row filter e.g. for 
>  format as _0004 (where 0004 - actionId) it would be 
> great to also have ability to specify the "fuzzy range" , e.g. _0004, 
> ..., _0099.
> See initial discussion here: http://search-hadoop.com/m/WVLJdX0Z65
> Note: currently it is possible to provide multiple fuzzy row rules to the 
> existing FuzzyRowFilter, but when the range is big (contains thousands of 
> values) it is not efficient.
> Filter should perform efficient fast-forwarding during the scan (this is what 
> distinguishes it from a regex row filter).
> While such functionality may seem like a proper fit for a custom filter (i.e. 
> not included in the standard filter set), it looks like the filter may be 
> very re-usable. We may judge based on the implementation that will hopefully 
> be added.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Comment Edited] (HBASE-6618) Implement FuzzyRowFilter with ranges support

2014-04-09 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13964897#comment-13964897
 ] 

chunhui shen edited comment on HBASE-6618 at 4/10/14 2:02 AM:
--

bq. you have to somehow define how to put "?" if I want it as "normal byte".
We could use '\' before '?' to denote a literal byte '?'.

In my view, the user could construct FuzzyRowFilter from a readable String 
directly,
e.g. {noformat}???11??AA\x00??\?{noformat}
Use Bytes.toBytesBinary to convert the string to bytes, then parse the bytes: 
if a byte is '?', mark it as a non-fixed byte; if it is '\', skip it and take 
the next byte literally; and so on.

Of course, if the user wants '\x00' to be taken as 4 literal bytes, the above 
would be wrong.
For this case, we should also support constructing FuzzyRowFilter from a 
readable byte array.
For example, {noformat}???11??AA\x00??\?{noformat} => 
byte[0]='?'
byte[1]='?'
byte[2]='?'
byte[3]='1'
byte[4]='1'
byte[5]='?'
byte[6]='?'
byte[7]='A'
byte[8]='A'
byte[9]=0
byte[10]='?'
byte[11]='?'
byte[12]='\'
byte[13]='?'

Correct me if something is wrong :)
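A minimal parsing sketch for the readable pattern (illustrative only, not an 
actual API of FuzzyRowFilter):
{code}
// mask byte: 0 = fixed position, 1 = non-fixed position ('?').
byte[] raw = Bytes.toBytesBinary(pattern);
ByteArrayOutputStream row = new ByteArrayOutputStream();
ByteArrayOutputStream mask = new ByteArrayOutputStream();
for (int i = 0; i < raw.length; i++) {
  byte b = raw[i];
  if (b == '\\' && i + 1 < raw.length) {  // escaped literal, e.g. "\?"
    row.write(raw[++i]);
    mask.write(0);
  } else if (b == '?') {                  // non-fixed byte
    row.write(0);
    mask.write(1);
  } else {                                // fixed byte
    row.write(b);
    mask.write(0);
  }
}
Filter filter = new FuzzyRowFilter(Arrays.asList(
    new Pair<byte[], byte[]>(row.toByteArray(), mask.toByteArray())));
{code}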



was (Author: zjushch):
bq.you have to somehow define how to put "?" if I want it as "normal byte".
We could use '\' before '?' to define the normal byte '?'

As my consideration,  user could construct FuzzyRowFilter with the readable 
String directly.
e.g "???11??AA\x00??\?"
Using Bytes.toBytesBinary to convert the string to bytes, then parse the bytes, 
if the byte is '?', mark it as non-fixed byte, if the byte is '\', skip it and 
see the next byte, and so on

Of course, if the user wants '\x00' to be taken as 4 literal bytes, the above 
parsing is wrong. For this case, we should also support constructing 
FuzzyRowFilter from a readable byte array.
For example, "???11??AA\x00??\?" => 
byte[0]='?'
byte[1]='?'
byte[2]='?'
byte[3]='1'
byte[4]='1'
byte[5]='?'
byte[6]='?'
byte[7]='A'
byte[8]='A'
byte[9]=0
byte[10]='?'
byte[11]='?'
byte[12]='\'
byte[13]='?'

Correct me if something is wrong :)


> Implement FuzzyRowFilter with ranges support
> 
>
> Key: HBASE-6618
> URL: https://issues.apache.org/jira/browse/HBASE-6618
> Project: HBase
>  Issue Type: New Feature
>  Components: Filters
>Reporter: Alex Baranau
>Assignee: Alex Baranau
>Priority: Minor
> Fix For: 0.99.0
>
> Attachments: HBASE-6618-algo-desc-bits.png, HBASE-6618-algo.patch, 
> HBASE-6618.patch, HBASE-6618_2.path, HBASE-6618_3.path, HBASE-6618_4.patch, 
> HBASE-6618_5.patch
>
>
> Apart from the current ability to specify a fuzzy row filter, e.g. for 
>  format as _0004 (where 0004 - actionId), it would be 
> great to also have the ability to specify a "fuzzy range", e.g. _0004, 
> ..., _0099.
> See initial discussion here: http://search-hadoop.com/m/WVLJdX0Z65
> Note: currently it is possible to provide multiple fuzzy row rules to 
> existing FuzzyRowFilter, but in case when the range is big (contains 
> thousands of values) it is not efficient.
> Filter should perform efficient fast-forwarding during the scan (this is what 
> distinguishes it from regex row filter).
> While such functionality may seem like a proper fit for a custom filter (i.e. 
> not included in the standard filter set), it looks like the filter may be very 
> re-usable. We may judge based on the implementation that will hopefully be 
> added.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-6618) Implement FuzzyRowFilter with ranges support

2014-04-09 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13964897#comment-13964897
 ] 

chunhui shen commented on HBASE-6618:
-

bq.you have to somehow define how to put "?" if I want it as "normal byte".
We could use '\' before '?' to define the normal byte '?'.

In my view, the user could construct a FuzzyRowFilter from a readable 
String directly,
e.g. "???11??AA\x00??\?"
Use Bytes.toBytesBinary to convert the string to bytes, then parse the bytes: 
if a byte is '?', mark it as a non-fixed byte; if a byte is '\', skip it and 
treat the next byte as a fixed byte; and so on.

Of course, if the user wants '\x00' to be taken as 4 literal bytes, the above 
parsing is wrong. For this case, we should also support constructing 
FuzzyRowFilter from a readable byte array.
For example, "???11??AA\x00??\?" => 
byte[0]='?'
byte[1]='?'
byte[2]='?'
byte[3]='1'
byte[4]='1'
byte[5]='?'
byte[6]='?'
byte[7]='A'
byte[8]='A'
byte[9]=0
byte[10]='?'
byte[11]='?'
byte[12]='\'
byte[13]='?'

Correct me if something is wrong :)
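
For illustration, a minimal sketch of that parsing, assuming Bytes.toBytesBinary 
only resolves \xNN escapes and leaves '\?' as two bytes, and assuming a mask 
convention of 0 = fixed, 1 = non-fixed (the helper and its return type are 
illustrative, not from any attached patch):

{code}
import java.io.ByteArrayOutputStream;

import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.hbase.util.Pair;

public class FuzzyPatternParser {
  // Returns (row bytes, fuzzy mask); mask byte 0 = fixed, 1 = non-fixed (assumed).
  static Pair<byte[], byte[]> parse(String readable) {
    byte[] raw = Bytes.toBytesBinary(readable); // resolves \x00-style escapes
    ByteArrayOutputStream row = new ByteArrayOutputStream();
    ByteArrayOutputStream mask = new ByteArrayOutputStream();
    for (int i = 0; i < raw.length; i++) {
      byte b = raw[i];
      if (b == '\\' && i + 1 < raw.length && raw[i + 1] == '?') {
        row.write('?'); // escaped '?' is a normal fixed byte
        mask.write(0);
        i++;            // skip the escape character
      } else if (b == '?') {
        row.write(0);   // non-fixed position, the value does not matter
        mask.write(1);
      } else {
        row.write(b);   // fixed byte
        mask.write(0);
      }
    }
    return new Pair<byte[], byte[]>(row.toByteArray(), mask.toByteArray());
  }
}
{code}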


> Implement FuzzyRowFilter with ranges support
> 
>
> Key: HBASE-6618
> URL: https://issues.apache.org/jira/browse/HBASE-6618
> Project: HBase
>  Issue Type: New Feature
>  Components: Filters
>Reporter: Alex Baranau
>Assignee: Alex Baranau
>Priority: Minor
> Fix For: 0.99.0
>
> Attachments: HBASE-6618-algo-desc-bits.png, HBASE-6618-algo.patch, 
> HBASE-6618.patch, HBASE-6618_2.path, HBASE-6618_3.path, HBASE-6618_4.patch, 
> HBASE-6618_5.patch
>
>
> Apart from the current ability to specify a fuzzy row filter, e.g. for 
>  format as _0004 (where 0004 - actionId), it would be 
> great to also have the ability to specify a "fuzzy range", e.g. _0004, 
> ..., _0099.
> See initial discussion here: http://search-hadoop.com/m/WVLJdX0Z65
> Note: currently it is possible to provide multiple fuzzy row rules to 
> existing FuzzyRowFilter, but in case when the range is big (contains 
> thousands of values) it is not efficient.
> Filter should perform efficient fast-forwarding during the scan (this is what 
> distinguishes it from regex row filter).
> While such functionality may seem like a proper fit for a custom filter (i.e. 
> not included in the standard filter set), it looks like the filter may be very 
> re-usable. We may judge based on the implementation that will hopefully be 
> added.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-6618) Implement FuzzyRowFilter with ranges support

2014-04-08 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13963753#comment-13963753
 ] 

chunhui shen commented on HBASE-6618:
-

FuzzyRowFilter seems usable only by advanced users.

Should we support creating FuzzyRowFilter from a human-readable String, e.g. 
_0004_??
There are many caveats when using FuzzyRowFilter; could we perform these checks 
inside FuzzyRowFilter itself?



> Implement FuzzyRowFilter with ranges support
> 
>
> Key: HBASE-6618
> URL: https://issues.apache.org/jira/browse/HBASE-6618
> Project: HBase
>  Issue Type: New Feature
>  Components: Filters
>Reporter: Alex Baranau
>Assignee: Alex Baranau
>Priority: Minor
> Fix For: 0.99.0
>
> Attachments: HBASE-6618-algo-desc-bits.png, HBASE-6618-algo.patch, 
> HBASE-6618.patch, HBASE-6618_2.path, HBASE-6618_3.path, HBASE-6618_4.patch, 
> HBASE-6618_5.patch
>
>
> Apart from the current ability to specify a fuzzy row filter, e.g. for 
>  format as _0004 (where 0004 - actionId), it would be 
> great to also have the ability to specify a "fuzzy range", e.g. _0004, 
> ..., _0099.
> See initial discussion here: http://search-hadoop.com/m/WVLJdX0Z65
> Note: currently it is possible to provide multiple fuzzy row rules to 
> existing FuzzyRowFilter, but in case when the range is big (contains 
> thousands of values) it is not efficient.
> Filter should perform efficient fast-forwarding during the scan (this is what 
> distinguishes it from regex row filter).
> While such functionality may seem like a proper fit for a custom filter (i.e. 
> not included in the standard filter set), it looks like the filter may be very 
> re-usable. We may judge based on the implementation that will hopefully be 
> added.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10906) Change error log for NamingException in TableInputFormatBase to WARN level

2014-04-03 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13959532#comment-13959532
 ] 

chunhui shen commented on HBASE-10906:
--

lgtm
+1

> Change error log for NamingException in TableInputFormatBase to WARN level
> --
>
> Key: HBASE-10906
> URL: https://issues.apache.org/jira/browse/HBASE-10906
> Project: HBase
>  Issue Type: Task
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Attachments: 10906-v1.txt
>
>
> Over in this thread:
> http://search-hadoop.com/m/DHED4Qp3ho/HBase+resolveDns+error+in+log&subj=HBase+resolveDns+error+in+log
> Amit mentioned that despite the error log, the mapreduce job executed 
> successfully.
> The log level should be lowered to WARN.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10850) essential column family optimization is broken

2014-04-03 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13958692#comment-13958692
 ] 

chunhui shen commented on HBASE-10850:
--

Thanks.  I see :)




> essential column family optimization is broken
> --
>
> Key: HBASE-10850
> URL: https://issues.apache.org/jira/browse/HBASE-10850
> Project: HBase
>  Issue Type: Bug
>  Components: Coprocessors, Filters, Performance
>Affects Versions: 0.96.1.1
>Reporter: Fabien Le Gallo
>Assignee: Ted Yu
>Priority: Blocker
> Fix For: 0.98.1, 0.99.0, 0.96.3
>
> Attachments: 10850-hasFilterRow-v1.txt, 10850-hasFilterRow-v2.txt, 
> 10850-hasFilterRow-v3.txt, 10850-v4.txt, 10850-v5.txt, 10850-v6.txt, 
> 10850-v7.txt, HBASE-10850-96.patch, HBASE-10850.patch, HBASE-10850_V2.patch, 
> HBaseSingleColumnValueFilterTest.java, TestWithMiniCluster.java
>
>
> When using the filter SingleColumnValueFilter, and depending on the columns 
> specified in the scan (the filtering column always being specified), the 
> results can be different.
> Here is an example.
> Suppose the following table:
> ||key||a:foo||a:bar||b:foo||b:bar||
> |1|false|_flag_|_flag_|_flag_|
> |2|true|_flag_|_flag_|_flag_|
> |3| |_flag_|_flag_|_flag_|
> With this filter:
> {code}
> SingleColumnValueFilter filter = new 
> SingleColumnValueFilter(Bytes.toBytes("a"), Bytes.toBytes("foo"), 
> CompareOp.EQUAL, new BinaryComparator(Bytes.toBytes("false")));
> filter.setFilterIfMissing(true);
> {code}
> Depending on how I specify the list of columns to add in the scan, the result 
> is different. Yet, all examples below should always return only the first row 
> (key '1'):
> OK:
> {code}
> scan.addFamily(Bytes.toBytes("a"));
> {code}
> KO (2 results returned, row '3' without 'a:foo' qualifier is returned):
> {code}
> scan.addFamily(Bytes.toBytes("a"));
> scan.addFamily(Bytes.toBytes("b"));
> {code}
> KO (2 results returned, row '3' without 'a:foo' qualifier is returned):
> {code}
> scan.addColumn(Bytes.toBytes("a"), Bytes.toBytes("foo"));
> scan.addColumn(Bytes.toBytes("a"), Bytes.toBytes("bar"));
> scan.addColumn(Bytes.toBytes("b"), Bytes.toBytes("foo"));
> {code}
> OK:
> {code}
> scan.addColumn(Bytes.toBytes("a"), Bytes.toBytes("foo"));
> scan.addColumn(Bytes.toBytes("b"), Bytes.toBytes("bar"));
> {code}
> OK:
> {code}
> scan.addColumn(Bytes.toBytes("a"), Bytes.toBytes("foo"));
> scan.addColumn(Bytes.toBytes("a"), Bytes.toBytes("bar"));
> {code}
> This is a regression, as it was working properly on HBase 0.92.
> You will find in the attachments the unit tests reproducing the issue.
> +The analysis of this issue led us to 2 critical bugs introduced in 96 and 
> above versions+
> 1. The essential family optimization is broken in some cases. When there is a 
> condition on some families, we first read those KVs and apply the condition on 
> them; when the condition says to filter out that row, we should not go ahead 
> and fetch data from the remaining non-essential CFs. But now, in most cases, 
> we do this unwanted data read, which fully defeats the optimization.
> 2. We have a CP hook postFilterRow() which will be called when a row is 
> getting filtered out by the Filter. This allows the CP to do a reseek to the 
> next known row which it thinks can evaluate the condition to true. But 
> currently in 96+ code, this hook is not getting called.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10705) CompactionRequest#toString() may throw NullPointerException

2014-04-03 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13958679#comment-13958679
 ] 

chunhui shen commented on HBASE-10705:
--

looks good, +1

> CompactionRequest#toString() may throw NullPointerException
> ---
>
> Key: HBASE-10705
> URL: https://issues.apache.org/jira/browse/HBASE-10705
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.96.0
>Reporter: Ted Yu
>Assignee: Rekha Joshi
>Priority: Minor
> Fix For: 0.96.0
>
> Attachments: HBASE-10705.1.patch
>
>
> I found the following in 
> hbase-server/target/surefire-reports/org.apache.hadoop.hbase.util.TestMergeTable-output.txt
>  :
> {code}
> 2014-03-08 01:22:35,311 INFO  [IPC Server handler 0 on 39151] 
> blockmanagement.BlockManager(1009): BLOCK* addToInvalidates: 
> blk_1073741852_1028 127.0.0.1:58684
> 2014-03-08 01:22:35,312 INFO  
> [RS:0;kiyo:45971-shortCompactions-1394241753752] regionserver.HRegion(1393): 
> compaction interrupted
> java.io.InterruptedIOException: Aborting compaction of store contents in 
> region test,,1394241738901.edbcdf3be9dd27c52b1fca1b09a5a582. because it was 
> interrupted.
> at 
> org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor.compact(DefaultCompactor.java:81)
> at 
> org.apache.hadoop.hbase.regionserver.DefaultStoreEngine$DefaultCompactionContext.compact(DefaultStoreEngine.java:109)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:1131)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1390)
> at 
> org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.run(CompactSplitThread.java:475)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:724)
> 2014-03-08 01:22:35,314 DEBUG [RS_CLOSE_REGION-kiyo:45971-0] 
> regionserver.HRegion(1069): Updates disabled for region 
> test,,1394241738901.edbcdf3be9dd27c52b1fca1b09a5a582.
> 2014-03-08 01:22:35,316 INFO  
> [StoreCloserThread-test,,1394241738901.edbcdf3be9dd27c52b1fca1b09a5a582.-1] 
> regionserver.HStore(793): Closed contents
> 2014-03-08 01:22:35,316 ERROR 
> [RS:0;kiyo:45971-shortCompactions-1394241753752] 
> regionserver.CompactSplitThread$CompactionRunner(496): Compaction failed 
> Request = regionName=test,,1394241738901.edbcdf3be9dd27c52b1fca1b09a5a582., 
> storeName=contents, fileCount=7, fileSize=71.3 M (10.2 M, 10.2 M, 10.2 M, 
> 10.2 M, 10.2 M), priority=3, time=8144240699213330
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest$2.apply(CompactionRequest.java:213)
> at 
> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest$2.apply(CompactionRequest.java:211)
> at com.google.common.collect.Iterators$9.transform(Iterators.java:845)
> at 
> com.google.common.collect.TransformedIterator.next(TransformedIterator.java:48)
> at com.google.common.base.Joiner.appendTo(Joiner.java:125)
> at com.google.common.base.Joiner.appendTo(Joiner.java:186)
> at com.google.common.base.Joiner.join(Joiner.java:243)
> at com.google.common.base.Joiner.join(Joiner.java:232)
> at 
> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest.toString(CompactionRequest.java:204)
> at java.lang.String.valueOf(String.java:2854)
> at java.lang.StringBuilder.append(StringBuilder.java:128)
> at 
> org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.toString(CompactSplitThread.java:425)
> at java.lang.String.valueOf(String.java:2854)
> at java.lang.StringBuilder.append(StringBuilder.java:128)
> at 
> org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.run(CompactSplitThread.java:477)
> {code}
> The exception came from apply() method:
> {code}
>   }), new Function<StoreFile, String>() {
> public String apply(StoreFile sf) {
>   return StringUtils.humanReadableInt(sf.getReader().length());
> }
>   }));
> {code}
> Looks like sf.getReader() might become null when 
> StringUtils.humanReadableInt() is called
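
For illustration, a null-safe variant of that Function (an assumption about one 
possible fix, not the committed patch):

{code}
import com.google.common.base.Function;
import org.apache.hadoop.hbase.regionserver.StoreFile;
import org.apache.hadoop.util.StringUtils;

// Illustrative only: guard against a reader that has already been closed.
Function<StoreFile, String> fileSize = new Function<StoreFile, String>() {
  @Override
  public String apply(StoreFile sf) {
    StoreFile.Reader reader = sf.getReader();
    return reader == null ? "unknown" : StringUtils.humanReadableInt(reader.length());
  }
};
{code}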



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10850) essential column family optimization is broken

2014-04-03 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13958678#comment-13958678
 ] 

chunhui shen commented on HBASE-10850:
--

What error would occur if we used
{code}  if (results.isEmpty() || filterRow()) {{code} 

instead of

{code}
+  if ((isEmptyRow || ret == FilterWrapper.FilterRowRetCode.EXCLUDE) || 
filterRow()) {
{code} 

Just a doubt, since I'm not completely clear on this.

Patch is good.

> essential column family optimization is broken
> --
>
> Key: HBASE-10850
> URL: https://issues.apache.org/jira/browse/HBASE-10850
> Project: HBase
>  Issue Type: Bug
>  Components: Coprocessors, Filters, Performance
>Affects Versions: 0.96.1.1
>Reporter: Fabien Le Gallo
>Assignee: Ted Yu
>Priority: Blocker
> Fix For: 0.98.1, 0.99.0, 0.96.3
>
> Attachments: 10850-hasFilterRow-v1.txt, 10850-hasFilterRow-v2.txt, 
> 10850-hasFilterRow-v3.txt, 10850-v4.txt, 10850-v5.txt, 10850-v6.txt, 
> 10850-v7.txt, HBASE-10850-96.patch, HBASE-10850.patch, HBASE-10850_V2.patch, 
> HBaseSingleColumnValueFilterTest.java, TestWithMiniCluster.java
>
>
> When using the filter SingleColumnValueFilter, and depending on the columns 
> specified in the scan (the filtering column always being specified), the 
> results can be different.
> Here is an example.
> Suppose the following table:
> ||key||a:foo||a:bar||b:foo||b:bar||
> |1|false|_flag_|_flag_|_flag_|
> |2|true|_flag_|_flag_|_flag_|
> |3| |_flag_|_flag_|_flag_|
> With this filter:
> {code}
> SingleColumnValueFilter filter = new 
> SingleColumnValueFilter(Bytes.toBytes("a"), Bytes.toBytes("foo"), 
> CompareOp.EQUAL, new BinaryComparator(Bytes.toBytes("false")));
> filter.setFilterIfMissing(true);
> {code}
> Depending on how I specify the list of columns to add in the scan, the result 
> is different. Yet, all examples below should always return only the first row 
> (key '1'):
> OK:
> {code}
> scan.addFamily(Bytes.toBytes("a"));
> {code}
> KO (2 results returned, row '3' without 'a:foo' qualifier is returned):
> {code}
> scan.addFamily(Bytes.toBytes("a"));
> scan.addFamily(Bytes.toBytes("b"));
> {code}
> KO (2 results returned, row '3' without 'a:foo' qualifier is returned):
> {code}
> scan.addColumn(Bytes.toBytes("a"), Bytes.toBytes("foo"));
> scan.addColumn(Bytes.toBytes("a"), Bytes.toBytes("bar"));
> scan.addColumn(Bytes.toBytes("b"), Bytes.toBytes("foo"));
> {code}
> OK:
> {code}
> scan.addColumn(Bytes.toBytes("a"), Bytes.toBytes("foo"));
> scan.addColumn(Bytes.toBytes("b"), Bytes.toBytes("bar"));
> {code}
> OK:
> {code}
> scan.addColumn(Bytes.toBytes("a"), Bytes.toBytes("foo"));
> scan.addColumn(Bytes.toBytes("a"), Bytes.toBytes("bar"));
> {code}
> This is a regression, as it was working properly on HBase 0.92.
> You will find in the attachments the unit tests reproducing the issue.
> +The analysis of this issue led us to 2 critical bugs introduced in 96 and 
> above versions+
> 1. The essential family optimization is broken in some cases. When there is a 
> condition on some families, we first read those KVs and apply the condition on 
> them; when the condition says to filter out that row, we should not go ahead 
> and fetch data from the remaining non-essential CFs. But now, in most cases, 
> we do this unwanted data read, which fully defeats the optimization.
> 2. We have a CP hook postFilterRow() which will be called when a row is 
> getting filtered out by the Filter. This allows the CP to do a reseek to the 
> next known row which it thinks can evaluate the condition to true. But 
> currently in 96+ code, this hook is not getting called.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10848) Filter SingleColumnValueFilter combined with NullComparator does not work

2014-03-31 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955995#comment-13955995
 ] 

chunhui shen commented on HBASE-10848:
--

lgtm
+1 on v4

> Filter SingleColumnValueFilter combined with NullComparator does not work
> -
>
> Key: HBASE-10848
> URL: https://issues.apache.org/jira/browse/HBASE-10848
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 0.96.1.1
>Reporter: Fabien Le Gallo
> Attachments: HBASE-10848.patch, HBASE_10848-v2.patch, 
> HBASE_10848-v3.patch, HBASE_10848-v4.patch, HBaseRegression.java, 
> TestScanWithNullComparable.java
>
>
> I want to filter out from the scan the rows that do not have a specific 
> column qualifier. For this purpose I use the filter SingleColumnValueFilter 
> combined with the NullComparator.
> But every time I use this in a scan, I get the following exception:
> {code}
> java.lang.RuntimeException: org.apache.hadoop.hbase.DoNotRetryIOException: 
> Failed after retry of OutOfOrderScannerNextException: was there a rpc timeout?
> at 
> org.apache.hadoop.hbase.client.AbstractClientScanner$1.hasNext(AbstractClientScanner.java:47)
> at 
> com.xxx.xxx.test.HBaseRegression.nullComparator(HBaseRegression.java:92)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runNotIgnored(BlockJUnit4ClassRunner.java:79)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:71)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:49)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
> at 
> org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
> at 
> org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
> at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
> at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
> at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
> at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
> Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: Failed after retry 
> of OutOfOrderScannerNextException: was there a rpc timeout?
> at 
> org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:391)
> at 
> org.apache.hadoop.hbase.client.AbstractClientScanner$1.hasNext(AbstractClientScanner.java:44)
> ... 25 more
> Caused by: org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: 
> org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: Expected 
> nextCallSeq: 1 But the nextCallSeq got from client: 0; request=scanner_id: 
> 7998309028985532303 number_of_rows: 100 close_scanner: false next_call_seq: 0
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3011)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:26929)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2175)
> at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1879)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:526)

[jira] [Commented] (HBASE-10848) Filter SingleColumnValueFilter combined with NullComparator does not work

2014-03-27 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13950261#comment-13950261
 ] 

chunhui shen commented on HBASE-10848:
--

Move the test case to TestFilter.java or TestFromClientSide?

> Filter SingleColumnValueFilter combined with NullComparator does not work
> -
>
> Key: HBASE-10848
> URL: https://issues.apache.org/jira/browse/HBASE-10848
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 0.96.1.1
>Reporter: Fabien Le Gallo
> Attachments: HBASE-10848.patch, HBASE_10848-v2.patch, 
> HBaseRegression.java, TestScanWithNullComparable.java
>
>
> I want to filter out from the scan the rows that do not have a specific 
> column qualifier. For this purpose I use the filter SingleColumnValueFilter 
> combined with the NullComparator.
> But every time I use this in a scan, I get the following exception:
> {code}
> java.lang.RuntimeException: org.apache.hadoop.hbase.DoNotRetryIOException: 
> Failed after retry of OutOfOrderScannerNextException: was there a rpc timeout?
> at 
> org.apache.hadoop.hbase.client.AbstractClientScanner$1.hasNext(AbstractClientScanner.java:47)
> at 
> com.xxx.xxx.test.HBaseRegression.nullComparator(HBaseRegression.java:92)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runNotIgnored(BlockJUnit4ClassRunner.java:79)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:71)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:49)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
> at 
> org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
> at 
> org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
> at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
> at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
> at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
> at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
> Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: Failed after retry 
> of OutOfOrderScannerNextException: was there a rpc timeout?
> at 
> org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:391)
> at 
> org.apache.hadoop.hbase.client.AbstractClientScanner$1.hasNext(AbstractClientScanner.java:44)
> ... 25 more
> Caused by: org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: 
> org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: Expected 
> nextCallSeq: 1 But the nextCallSeq got from client: 0; request=scanner_id: 
> 7998309028985532303 number_of_rows: 100 close_scanner: false next_call_seq: 0
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3011)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:26929)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2175)
> at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1879)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:526

[jira] [Commented] (HBASE-10787) TestHCM#testConnection* take too long

2014-03-18 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13940097#comment-13940097
 ] 

chunhui shen commented on HBASE-10787:
--

+1

> TestHCM#testConnection* take too long
> -
>
> Key: HBASE-10787
> URL: https://issues.apache.org/jira/browse/HBASE-10787
> Project: HBase
>  Issue Type: Task
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Attachments: 10787-v1.txt
>
>
> TestHCM#testConnectionClose takes more than 5 minutes on Apache Jenkins.
> The test can be shortened when retry count is lowered.
> On my Mac, for TestHCM#testConnection* (two tests)
> without patch:
> {code}
> Running org.apache.hadoop.hbase.client.TestHCM
> 2014-03-18 15:46:57.695 java[71368:1203] Unable to load realm info from 
> SCDynamicStore
> Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 242.2 sec
> {code}
> with patch:
> {code}
> Running org.apache.hadoop.hbase.client.TestHCM
> 2014-03-18 15:40:44.013 java[71184:1203] Unable to load realm info from 
> SCDynamicStore
> Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 100.465 sec
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10752) Port HBASE-10270 'Remove DataBlockEncoding from BlockCacheKey' to trunk

2014-03-18 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13940062#comment-13940062
 ] 

chunhui shen commented on HBASE-10752:
--

+1 on Patch-v3 since it leaves nothing to worry about.

> Port HBASE-10270 'Remove DataBlockEncoding from BlockCacheKey' to trunk
> ---
>
> Key: HBASE-10752
> URL: https://issues.apache.org/jira/browse/HBASE-10752
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Fix For: 0.99.0
>
> Attachments: 10752-v1.txt, 10752-v2.txt, 10752-v3.txt, 10752-v4.txt
>
>
> The JIRA removes the block encoding from the key and forces the caller of 
> HFileReaderV2.readBlock() to specify the expected BlockType as well as the 
> expected DataBlockEncoding when these matter. This allows for a decision on 
> either of these at read time instead of cache time, puts responsibility where 
> appropriate, fixes some cache misses when using the scan preloading (which 
> does a read without knowing the type or encoding), allows for the 
> BlockCacheKey to be re-used by the L2 BucketCache and sets us up for a future 
> CompoundScannerV2 which can read both un-encoded and encoded data blocks.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10752) Port HBASE-10270 'Remove DataBlockEncoding from BlockCacheKey' to trunk

2014-03-18 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13940057#comment-13940057
 ] 

chunhui shen commented on HBASE-10752:
--

bq.I thought one issue was that after a schema change we might not have evicted 
all blocks when we scan the next time. So we'll have a scanner that does not 
understand the encoding.
It won't happen. When reading, the DataBlockEncoding information is recorded in 
the HFile, rather than decided by the table schema.

{code}
+  public boolean useEncodedScanner(boolean isCompaction) {
+if (isCompaction && encoding == DataBlockEncoding.NONE) {
+  return false;
+}
+return encoding != DataBlockEncoding.NONE;
+  }
{code}
It seems equivalent to 
{code}
+  public boolean useEncodedScanner(boolean isCompaction) {
+return encoding != DataBlockEncoding.NONE;
+  }
{code}

> Port HBASE-10270 'Remove DataBlockEncoding from BlockCacheKey' to trunk
> ---
>
> Key: HBASE-10752
> URL: https://issues.apache.org/jira/browse/HBASE-10752
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Fix For: 0.99.0
>
> Attachments: 10752-v1.txt, 10752-v2.txt, 10752-v3.txt, 10752-v4.txt
>
>
> The JIRA removes the block encoding from the key and forces the caller of 
> HFileReaderV2.readBlock() to specify the expected BlockType as well as the 
> expected DataBlockEncoding when these matter. This allows for a decision on 
> either of these at read time instead of cache time, puts responsibility where 
> appropriate, fixes some cache misses when using the scan preloading (which 
> does a read without knowing the type or encoding), allows for the 
> BlockCacheKey to be re-used by the L2 BucketCache and sets us up for a future 
> CompoundScannerV2 which can read both un-encoded and encoded data blocks.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10752) Port HBASE-10270 'Remove DataBlockEncoding from BlockCacheKey' to trunk

2014-03-16 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13937476#comment-13937476
 ] 

chunhui shen commented on HBASE-10752:
--

For me, I think we don't need the 'DataBlockEncoding' check in trunk. 
A cache key (file name + offset) maps to a unique data block. (In 0.94 it could 
map to two data blocks, so the check was needed there.)
Thus, there is no need to pass the expected DataBlockEncoding to readBlock().

However, I'm not completely sure about the above point, and keeping the check 
seems harmless. The patch is good for me if nobody else comments on it.
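
To illustrate the uniqueness point, a sketch of a cache key carrying only file 
name + offset (an illustrative class, not the actual trunk BlockCacheKey):

{code}
// Illustrative sketch: in trunk, file name + offset alone identify a cached
// block, so no DataBlockEncoding field is needed in the key.
public final class SimpleBlockCacheKey {
  private final String hfileName;
  private final long offset;

  public SimpleBlockCacheKey(String hfileName, long offset) {
    this.hfileName = hfileName;
    this.offset = offset;
  }

  @Override
  public int hashCode() {
    return hfileName.hashCode() * 127 + (int) (offset ^ (offset >>> 32));
  }

  @Override
  public boolean equals(Object obj) {
    if (!(obj instanceof SimpleBlockCacheKey)) {
      return false;
    }
    SimpleBlockCacheKey other = (SimpleBlockCacheKey) obj;
    return offset == other.offset && hfileName.equals(other.hfileName);
  }
}
{code}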

> Port HBASE-10270 'Remove DataBlockEncoding from BlockCacheKey' to trunk
> ---
>
> Key: HBASE-10752
> URL: https://issues.apache.org/jira/browse/HBASE-10752
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Fix For: 0.99.0
>
> Attachments: 10752-v1.txt, 10752-v2.txt, 10752-v3.txt
>
>
> The JIRA removes the block encoding from the key and forces the caller of 
> HFileReaderV2.readBlock() to specify the expected BlockType as well as the 
> expected DataBlockEncoding when these matter. This allows for a decision on 
> either of these at read time instead of cache time, puts responsibility where 
> appropriate, fixes some cache misses when using the scan preloading (which 
> does a read without knowing the type or encoding), allows for the 
> BlockCacheKey to be re-used by the L2 BucketCache and sets us up for a future 
> CompoundScannerV2 which can read both un-encoded and encoded data blocks.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10752) Port HBASE-10270 'Remove DataBlockEncoding from BlockCacheKey' to trunk

2014-03-15 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13936397#comment-13936397
 ] 

chunhui shen commented on HBASE-10752:
--

As I understand current trunk, a data block will only be cached 
in one fixed format (encoded or not encoded).

In HColumnDescriptor
{code}
  public static final String ENCODE_ON_DISK = // To be removed, it is not used anymore
      "ENCODE_ON_DISK";
{code}

Also in HFileDataBlockEncoderImpl, we only have one variable ‘encoding’ rather 
than two variables ‘onDisk’ and ‘inCache’.

Thus, in trunk, I think we could remove 'DataBlockEncoding' directly from 
BlockCacheKey. There is no need to check 'DataBlockEncoding' after reading a 
block from the cache.

In addition, it would be better to have a test covering the case 'there is a 
potential for caching them twice or missing the cache' mentioned in HBASE-10270.

> Port HBASE-10270 'Remove DataBlockEncoding from BlockCacheKey' to trunk
> ---
>
> Key: HBASE-10752
> URL: https://issues.apache.org/jira/browse/HBASE-10752
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Fix For: 0.99.0
>
> Attachments: 10752-v1.txt, 10752-v2.txt, 10752-v3.txt
>
>
> The JIRA removes the block encoding from the key and forces the caller of 
> HFileReaderV2.readBlock() to specify the expected BlockType as well as the 
> expected DataBlockEncoding when these matter. This allows for a decision on 
> either of these at read time instead of cache time, puts responsibility where 
> appropriate, fixes some cache misses when using the scan preloading (which 
> does a read without knowing the type or encoding), allows for the 
> BlockCacheKey to be re-used by the L2 BucketCache and sets us up for a future 
> CompoundScannerV2 which can read both un-encoded and encoded data blocks.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-9355) HBaseTestingUtility#cleanupDataTestDirOnTestFS() doesn't close the FileSystem

2014-03-04 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13920499#comment-13920499
 ] 

chunhui shen commented on HBASE-9355:
-

lgtm
+1

> HBaseTestingUtility#cleanupDataTestDirOnTestFS() doesn't close the FileSystem
> -
>
> Key: HBASE-9355
> URL: https://issues.apache.org/jira/browse/HBASE-9355
> Project: HBase
>  Issue Type: Test
>Affects Versions: 0.92.2
>Reporter: Ted Yu
>Assignee: Rekha Joshi
>Priority: Minor
> Attachments: HBASE-9355.1.patch
>
>
> Here is related code:
> {code}
>   public boolean cleanupDataTestDirOnTestFS() throws IOException {
> boolean ret = getTestFileSystem().delete(dataTestDirOnTestFS, true);
> if (ret)
>   dataTestDirOnTestFS = null;
> return ret;
>   }
> {code}
> The FileSystem returned by getTestFileSystem() is not closed.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-9355) HBaseTestingUtility#cleanupDataTestDirOnTestFS() doesn't close the FileSystem

2014-03-04 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13920473#comment-13920473
 ] 

chunhui shen commented on HBASE-9355:
-

Is there any reason to close the file system?
From the method name, it just needs to clean up the data.

> HBaseTestingUtility#cleanupDataTestDirOnTestFS() doesn't close the FileSystem
> -
>
> Key: HBASE-9355
> URL: https://issues.apache.org/jira/browse/HBASE-9355
> Project: HBase
>  Issue Type: Test
>Affects Versions: 0.92.2
>Reporter: Ted Yu
>Assignee: Rekha Joshi
>Priority: Minor
> Attachments: HBASE-9355.1.patch
>
>
> Here is related code:
> {code}
>   public boolean cleanupDataTestDirOnTestFS() throws IOException {
> boolean ret = getTestFileSystem().delete(dataTestDirOnTestFS, true);
> if (ret)
>   dataTestDirOnTestFS = null;
> return ret;
>   }
> {code}
> The FileSystem returned by getTestFileSystem() is not closed.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10597) IOEngine#read() should return the number of bytes transferred

2014-02-23 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13910039#comment-13910039
 ] 

chunhui shen commented on HBASE-10597:
--

+1

> IOEngine#read() should return the number of bytes transferred
> -
>
> Key: HBASE-10597
> URL: https://issues.apache.org/jira/browse/HBASE-10597
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 10597-v1.txt
>
>
> IOEngine#read() is called by BucketCache#getBlock().
> IOEngine#read() should return the number of bytes transferred so that 
> BucketCache#getBlock() can check this return value against the length 
> obtained from bucketEntry.
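
For illustration, a sketch of how the caller could use that return value (the 
post-patch read() signature and the helper are assumptions, not verified against 
the attached patch):

{code}
import java.io.IOException;
import java.nio.ByteBuffer;

// Illustrative helper: read a block and verify the number of bytes transferred.
final class BucketReadHelper {
  static ByteBuffer readAndVerify(IOEngine ioEngine, long offset, int expectedLen)
      throws IOException {
    ByteBuffer dst = ByteBuffer.allocate(expectedLen);
    int transferred = ioEngine.read(dst, offset); // assumed to return bytes read
    if (transferred != expectedLen) {
      throw new IOException("Expected " + expectedLen + " bytes at offset " + offset
          + " but only read " + transferred);
    }
    return dst;
  }
}
{code}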



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

