[jira] [Commented] (HBASE-6048) Table Scan is failing if offheap cache enabled

2012-11-08 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13493784#comment-13493784
 ] 

chunhui shen commented on HBASE-6048:
-

Yes, 0.94 has this problem~~~

> Table Scan is failing if offheap cache enabled
> --
>
> Key: HBASE-6048
> URL: https://issues.apache.org/jira/browse/HBASE-6048
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.0
>Reporter: Gopinathan A
>Assignee: ramkrishna.s.vasudevan
>
> Table Scan is failing if offheap cache enabled.
> {noformat}
> 2012-05-18 20:03:38,446 DEBUG org.apache.hadoop.hbase.io.hfile.HFileWriterV2: 
> Initialized with CacheConfig:enabled [cacheDataOnRead=true] 
> [cacheDataOnWrite=false] [cacheIndexesOnWrite=false] 
> [cacheBloomsOnWrite=false] [cacheEvictOnClose=false] [cacheCompressed=false]
> 2012-05-18 20:03:38,446 INFO org.apache.hadoop.hbase.regionserver.StoreFile: 
> Delete Family Bloom filter type for 
> hdfs://10.18.40.217:9000/hbase/ufdr/1d4656fd417a07c9171a38b8f4d08510/.tmp/03742024b28f443bb63cfc338d4ca422:
>  CompoundBloomFilterWriter
> 2012-05-18 20:04:25,576 DEBUG org.apache.hadoop.hbase.io.hfile.LruBlockCache: 
> Block cache LRU eviction started; Attempting to free 120.57 MB of 
> total=1020.57 MB
> 2012-05-18 20:04:25,655 DEBUG org.apache.hadoop.hbase.io.hfile.LruBlockCache: 
> Block cache LRU eviction completed; freed=120.82 MB, total=907.89 MB, 
> single=1012.11 MB, multi=6.12 MB, memory=0 KB
> 2012-05-18 20:04:25,733 ERROR 
> org.apache.hadoop.hbase.regionserver.HRegionServer: Failed openScanner
> java.lang.IllegalStateException: Schema metrics requested before table/CF 
> name initialization: {"tableName":"null","cfName":"null"}
>   at 
> org.apache.hadoop.hbase.regionserver.metrics.SchemaConfigured.getSchemaMetrics(SchemaConfigured.java:182)
>   at 
> org.apache.hadoop.hbase.io.hfile.LruBlockCache.updateSizeMetrics(LruBlockCache.java:310)
>   at 
> org.apache.hadoop.hbase.io.hfile.LruBlockCache.cacheBlock(LruBlockCache.java:274)
>   at 
> org.apache.hadoop.hbase.io.hfile.LruBlockCache.cacheBlock(LruBlockCache.java:293)
>   at 
> org.apache.hadoop.hbase.io.hfile.DoubleBlockCache.getBlock(DoubleBlockCache.java:102)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:296)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:213)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:455)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:475)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:226)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:145)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.&lt;init&gt;(StoreScanner.java:130)
>   at 
> org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:2001)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.&lt;init&gt;(HRegion.java:3274)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1604)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1596)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1572)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.openScanner(HRegionServer.java:2310)
>   at sun.reflect.GeneratedMethodAccessor45.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1376)
> 2012-05-18 20:04:25,828 ERROR 
> org.apache.hadoop.hbase.regionserver.HRegionServer: Failed openScanner
> {noformat}
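Reading the trace above, the IllegalStateException is thrown while DoubleBlockCache.getBlock re-caches a block into the on-heap LruBlockCache and the cache tries to update its per-schema size metrics, but the block has no table/column-family names attached. Below is a minimal sketch of that failure pattern, assuming a SchemaConfigured-style lazy metrics holder; everything except the names visible in the stack trace is an assumption, and this is not the actual 0.94 code.

{code}
// Illustrative only: a lazily-configured metrics holder in the style of
// SchemaConfigured.getSchemaMetrics(); not the actual HBase 0.94 source.
class SchemaAwareBlockSketch {
  private String tableName; // normally set when an HFile reader creates the block
  private String cfName;
  private Object schemaMetrics;

  void setSchema(String table, String cf) {
    this.tableName = table;
    this.cfName = cf;
  }

  Object getSchemaMetrics() {
    if (tableName == null || cfName == null) {
      // Same failure mode as the log above: the cache asks for per-schema
      // metrics on a block that never had table/CF names initialized,
      // e.g. one coming back from the off-heap cache path.
      throw new IllegalStateException(
          "Schema metrics requested before table/CF name initialization");
    }
    if (schemaMetrics == null) {
      schemaMetrics = new Object(); // stand-in for the real metrics object
    }
    return schemaMetrics;
  }
}
{code}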

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7046) Fix resource leak in TestHLogSplit#testOldRecoveredEditsFileSidelined

2012-11-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13493782#comment-13493782
 ] 

Hudson commented on HBASE-7046:
---

Integrated in HBase-TRUNK #3523 (See 
[https://builds.apache.org/job/HBase-TRUNK/3523/])
HBASE-7046 Fix resource leak in 
TestHLogSplit#testOldRecoveredEditsFileSidelined (Himanshu) (Revision 1407363)

 Result = FAILURE
tedyu : 
Files : 
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestHLogSplit.java


> Fix resource leak in TestHLogSplit#testOldRecoveredEditsFileSidelined
> -
>
> Key: HBASE-7046
> URL: https://issues.apache.org/jira/browse/HBASE-7046
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Affects Versions: 0.96.0
>Reporter: Himanshu Vashishtha
>Assignee: Himanshu Vashishtha
> Fix For: 0.96.0
>
> Attachments: HBASE-7046.patch
>
>
> This method creates a writer but never closes it.
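The usual shape of the fix is simply to close the writer before the test method returns, e.g. in a finally block. A minimal sketch under that assumption follows; the helper name is hypothetical and this is not the committed patch.

{code}
import java.io.Closeable;
import java.io.IOException;

class WriterCleanupSketch {
  // Hypothetical helper standing in for however the test creates its writer.
  static Closeable createRecoveredEditsWriter() throws IOException {
    return () -> { };
  }

  static void writeSidelinedEdits() throws IOException {
    Closeable writer = createRecoveredEditsWriter();
    try {
      // ... write the recovered-edits entries the test needs ...
    } finally {
      writer.close(); // the original test never closed its writer, leaking it
    }
  }
}
{code}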

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6048) Table Scan is failing if offheap cache enabled

2012-11-08 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13493780#comment-13493780
 ] 

ramkrishna.s.vasudevan commented on HBASE-6048:
---

For 0.94 we need to fix this then, right?  Actually, we did not focus on this 
much since the off-heap cache was an experimental feature.

> Table Scan is failing if offheap cache enabled
> --
>
> Key: HBASE-6048
> URL: https://issues.apache.org/jira/browse/HBASE-6048
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.0
>Reporter: Gopinathan A
>Assignee: ramkrishna.s.vasudevan
>
> Table Scan is failing if offheap cache enabled.
> {noformat}
> 2012-05-18 20:03:38,446 DEBUG org.apache.hadoop.hbase.io.hfile.HFileWriterV2: 
> Initialized with CacheConfig:enabled [cacheDataOnRead=true] 
> [cacheDataOnWrite=false] [cacheIndexesOnWrite=false] 
> [cacheBloomsOnWrite=false] [cacheEvictOnClose=false] [cacheCompressed=false]
> 2012-05-18 20:03:38,446 INFO org.apache.hadoop.hbase.regionserver.StoreFile: 
> Delete Family Bloom filter type for 
> hdfs://10.18.40.217:9000/hbase/ufdr/1d4656fd417a07c9171a38b8f4d08510/.tmp/03742024b28f443bb63cfc338d4ca422:
>  CompoundBloomFilterWriter
> 2012-05-18 20:04:25,576 DEBUG org.apache.hadoop.hbase.io.hfile.LruBlockCache: 
> Block cache LRU eviction started; Attempting to free 120.57 MB of 
> total=1020.57 MB
> 2012-05-18 20:04:25,655 DEBUG org.apache.hadoop.hbase.io.hfile.LruBlockCache: 
> Block cache LRU eviction completed; freed=120.82 MB, total=907.89 MB, 
> single=1012.11 MB, multi=6.12 MB, memory=0 KB
> 2012-05-18 20:04:25,733 ERROR 
> org.apache.hadoop.hbase.regionserver.HRegionServer: Failed openScanner
> java.lang.IllegalStateException: Schema metrics requested before table/CF 
> name initialization: {"tableName":"null","cfName":"null"}
>   at 
> org.apache.hadoop.hbase.regionserver.metrics.SchemaConfigured.getSchemaMetrics(SchemaConfigured.java:182)
>   at 
> org.apache.hadoop.hbase.io.hfile.LruBlockCache.updateSizeMetrics(LruBlockCache.java:310)
>   at 
> org.apache.hadoop.hbase.io.hfile.LruBlockCache.cacheBlock(LruBlockCache.java:274)
>   at 
> org.apache.hadoop.hbase.io.hfile.LruBlockCache.cacheBlock(LruBlockCache.java:293)
>   at 
> org.apache.hadoop.hbase.io.hfile.DoubleBlockCache.getBlock(DoubleBlockCache.java:102)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:296)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:213)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:455)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:475)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:226)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:145)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.&lt;init&gt;(StoreScanner.java:130)
>   at 
> org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:2001)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.&lt;init&gt;(HRegion.java:3274)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1604)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1596)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1572)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.openScanner(HRegionServer.java:2310)
>   at sun.reflect.GeneratedMethodAccessor45.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1376)
> 2012-05-18 20:04:25,828 ERROR 
> org.apache.hadoop.hbase.regionserver.HRegionServer: Failed openScanner
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-4676) Prefix Compression - Trie data block encoding

2012-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13493779#comment-13493779
 ] 

Hadoop QA commented on HBASE-4676:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12552778/HBASE-4676-prefix-tree-trunk-v4.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 142 
new or modified tests.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
103 warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 58 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.io.hfile.TestHFileDataBlockEncoder

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3285//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3285//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3285//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3285//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3285//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3285//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3285//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3285//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3285//console

This message is automatically generated.

> Prefix Compression - Trie data block encoding
> -
>
> Key: HBASE-4676
> URL: https://issues.apache.org/jira/browse/HBASE-4676
> Project: HBase
>  Issue Type: New Feature
>  Components: io, Performance, regionserver
>Affects Versions: 0.96.0
>Reporter: Matt Corgan
>Assignee: Matt Corgan
> Attachments: HBASE-4676-0.94-v1.patch, 
> HBASE-4676-prefix-tree-trunk-v1.patch, HBASE-4676-prefix-tree-trunk-v2.patch, 
> HBASE-4676-prefix-tree-trunk-v3.patch, HBASE-4676-prefix-tree-trunk-v4.patch, 
> hbase-prefix-trie-0.1.jar, PrefixTrie_Format_v1.pdf, 
> PrefixTrie_Performance_v1.pdf, SeeksPerSec by blockSize.png
>
>
> The HBase data block format has room for 2 significant improvements for 
> applications that have high block cache hit ratios.  
> First, there is no prefix compression, and the current KeyValue format is 
> somewhat metadata heavy, so there can be tremendous memory bloat for many 
> common data layouts, specifically those with long keys and short values.
> Second, there is no random access to KeyValues inside data blocks.  This 
> means that every time you double the datablock size, average seek time (or 
> average cpu consumption) goes up by a factor of 2.  The standard 64KB block 
> size is ~10x slower for random seeks than a 4KB block size, but block sizes 
> as small as 4KB cause problems elsewhere.  Using block sizes of 256KB or 1MB 
> or more may be more efficient from a disk access and block-cache perspective 
> in many big-data applications, but doing so is infeasible from a random seek 
> perspective.
> The PrefixTrie block encoding format attempts to solve both of these 
> problems.  Some features:
> * trie format for row key encoding completely eliminates duplicate row keys 
> and encodes similar row keys into a standard trie structure which also saves 
> a lot of space
> * the column family is currently stored once at the beginning of each block.  
> this could easily be modified to allow multiple family names per block
> * all qualifiers in the block are stored in their own trie format which 
> caters nicely to wide rows.  duplicate qualifers between rows are eliminated. 
>  the size of this trie determines the width of the block's qualifier 
> fixed-width-int

[jira] [Commented] (HBASE-6048) Table Scan is failing if offheap cache enabled

2012-11-08 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13493773#comment-13493773
 ] 

chunhui shen commented on HBASE-6048:
-

[~ram_krish]

We also hit this problem and created HBASE-7136, but I did not find the 
schemaMetrics in trunk, so I closed it.

> Table Scan is failing if offheap cache enabled
> --
>
> Key: HBASE-6048
> URL: https://issues.apache.org/jira/browse/HBASE-6048
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.0
>Reporter: Gopinathan A
>Assignee: ramkrishna.s.vasudevan
>
> Table Scan is failing if offheap cache enabled.
> {noformat}
> 2012-05-18 20:03:38,446 DEBUG org.apache.hadoop.hbase.io.hfile.HFileWriterV2: 
> Initialized with CacheConfig:enabled [cacheDataOnRead=true] 
> [cacheDataOnWrite=false] [cacheIndexesOnWrite=false] 
> [cacheBloomsOnWrite=false] [cacheEvictOnClose=false] [cacheCompressed=false]
> 2012-05-18 20:03:38,446 INFO org.apache.hadoop.hbase.regionserver.StoreFile: 
> Delete Family Bloom filter type for 
> hdfs://10.18.40.217:9000/hbase/ufdr/1d4656fd417a07c9171a38b8f4d08510/.tmp/03742024b28f443bb63cfc338d4ca422:
>  CompoundBloomFilterWriter
> 2012-05-18 20:04:25,576 DEBUG org.apache.hadoop.hbase.io.hfile.LruBlockCache: 
> Block cache LRU eviction started; Attempting to free 120.57 MB of 
> total=1020.57 MB
> 2012-05-18 20:04:25,655 DEBUG org.apache.hadoop.hbase.io.hfile.LruBlockCache: 
> Block cache LRU eviction completed; freed=120.82 MB, total=907.89 MB, 
> single=1012.11 MB, multi=6.12 MB, memory=0 KB
> 2012-05-18 20:04:25,733 ERROR 
> org.apache.hadoop.hbase.regionserver.HRegionServer: Failed openScanner
> java.lang.IllegalStateException: Schema metrics requested before table/CF 
> name initialization: {"tableName":"null","cfName":"null"}
>   at 
> org.apache.hadoop.hbase.regionserver.metrics.SchemaConfigured.getSchemaMetrics(SchemaConfigured.java:182)
>   at 
> org.apache.hadoop.hbase.io.hfile.LruBlockCache.updateSizeMetrics(LruBlockCache.java:310)
>   at 
> org.apache.hadoop.hbase.io.hfile.LruBlockCache.cacheBlock(LruBlockCache.java:274)
>   at 
> org.apache.hadoop.hbase.io.hfile.LruBlockCache.cacheBlock(LruBlockCache.java:293)
>   at 
> org.apache.hadoop.hbase.io.hfile.DoubleBlockCache.getBlock(DoubleBlockCache.java:102)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:296)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:213)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:455)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:475)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:226)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:145)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.&lt;init&gt;(StoreScanner.java:130)
>   at 
> org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:2001)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.&lt;init&gt;(HRegion.java:3274)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1604)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1596)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1572)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.openScanner(HRegionServer.java:2310)
>   at sun.reflect.GeneratedMethodAccessor45.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1376)
> 2012-05-18 20:04:25,828 ERROR 
> org.apache.hadoop.hbase.regionserver.HRegionServer: Failed openScanner
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-4676) Prefix Compression - Trie data block encoding

2012-11-08 Thread Matt Corgan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Corgan updated HBASE-4676:
---

Attachment: HBASE-4676-prefix-tree-trunk-v4.patch

Latest patch with hbase-common modifications from Stack's reviews: 
https://reviews.apache.org/r/7589/

> Prefix Compression - Trie data block encoding
> -
>
> Key: HBASE-4676
> URL: https://issues.apache.org/jira/browse/HBASE-4676
> Project: HBase
>  Issue Type: New Feature
>  Components: io, Performance, regionserver
>Affects Versions: 0.96.0
>Reporter: Matt Corgan
>Assignee: Matt Corgan
> Attachments: HBASE-4676-0.94-v1.patch, 
> HBASE-4676-prefix-tree-trunk-v1.patch, HBASE-4676-prefix-tree-trunk-v2.patch, 
> HBASE-4676-prefix-tree-trunk-v3.patch, HBASE-4676-prefix-tree-trunk-v4.patch, 
> hbase-prefix-trie-0.1.jar, PrefixTrie_Format_v1.pdf, 
> PrefixTrie_Performance_v1.pdf, SeeksPerSec by blockSize.png
>
>
> The HBase data block format has room for 2 significant improvements for 
> applications that have high block cache hit ratios.  
> First, there is no prefix compression, and the current KeyValue format is 
> somewhat metadata heavy, so there can be tremendous memory bloat for many 
> common data layouts, specifically those with long keys and short values.
> Second, there is no random access to KeyValues inside data blocks.  This 
> means that every time you double the datablock size, average seek time (or 
> average cpu consumption) goes up by a factor of 2.  The standard 64KB block 
> size is ~10x slower for random seeks than a 4KB block size, but block sizes 
> as small as 4KB cause problems elsewhere.  Using block sizes of 256KB or 1MB 
> or more may be more efficient from a disk access and block-cache perspective 
> in many big-data applications, but doing so is infeasible from a random seek 
> perspective.
> The PrefixTrie block encoding format attempts to solve both of these 
> problems.  Some features:
> * trie format for row key encoding completely eliminates duplicate row keys 
> and encodes similar row keys into a standard trie structure which also saves 
> a lot of space
> * the column family is currently stored once at the beginning of each block.  
> this could easily be modified to allow multiple family names per block
> * all qualifiers in the block are stored in their own trie format which 
> caters nicely to wide rows.  duplicate qualifers between rows are eliminated. 
>  the size of this trie determines the width of the block's qualifier 
> fixed-width-int
> * the minimum timestamp is stored at the beginning of the block, and deltas 
> are calculated from that.  the maximum delta determines the width of the 
> block's timestamp fixed-width-int
> The block is structured with metadata at the beginning, then a section for 
> the row trie, then the column trie, then the timestamp deltas, and then then 
> all the values.  Most work is done in the row trie, where every leaf node 
> (corresponding to a row) contains a list of offsets/references corresponding 
> to the cells in that row.  Each cell is fixed-width to enable binary 
> searching and is represented by [1 byte operationType, X bytes qualifier 
> offset, X bytes timestamp delta offset].
> If all operation types are the same for a block, there will be zero per-cell 
> overhead.  Same for timestamps.  Same for qualifiers when i get a chance.  
> So, the compression aspect is very strong, but makes a few small sacrifices 
> on VarInt size to enable faster binary searches in trie fan-out nodes.
> A more compressed but slower version might build on this by also applying 
> further (suffix, etc) compression on the trie nodes at the cost of slower 
> write speed.  Even further compression could be obtained by using all VInts 
> instead of FInts with a sacrifice on random seek speed (though not huge).
> One current drawback is the current write speed.  While programmed with good 
> constructs like TreeMaps, ByteBuffers, binary searches, etc, it's not 
> programmed with the same level of optimization as the read path.  Work will 
> need to be done to optimize the data structures used for encoding and could 
> probably show a 10x increase.  It will still be slower than delta encoding, 
> but with a much higher decode speed.  I have not yet created a thorough 
> benchmark for write speed nor sequential read speed.
> Though the trie is reaching a point where it is internally very efficient 
> (probably within half or a quarter of its max read speed) the way that hbase 
> currently uses it is far from optimal.  The KeyValueScanner and related 
> classes that iterate through the trie will eventually need to be smarter and 
> have methods to do things like skipping to the next row of results without 
> scanning every cell in between.  When that is
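As a small worked example of the fixed-width-int sizing mentioned above, the number of bytes a block spends per cell on a timestamp delta (or qualifier offset) follows directly from the largest value that has to be stored. The sketch below only illustrates that calculation and is not the prefix-tree encoder itself.

{code}
// Illustrative only, not the actual prefix-tree encoder: the width (in bytes)
// of a block's fixed-width int is chosen from the largest delta/offset stored.
final class FixedWidthSketch {
  static int fixedWidthBytes(long maxValue) {
    int bytes = 1;
    while (bytes < 8 && (maxValue >>> (8 * bytes)) != 0) {
      bytes++;
    }
    return bytes;
  }

  public static void main(String[] args) {
    // A maximum timestamp delta of 300 fits in 2 bytes, so every cell in the
    // block spends 2 bytes on its timestamp delta; a max of 70,000 needs 3.
    System.out.println(fixedWidthBytes(300));    // 2
    System.out.println(fixedWidthBytes(70000));  // 3
  }
}
{code}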

[jira] [Commented] (HBASE-7010) PrefixFilter should seek to first matching row

2012-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13493766#comment-13493766
 ] 

Hadoop QA commented on HBASE-7010:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12552777/7010-experimental.txt
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified tests.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
87 warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 16 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.filter.TestFilter

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3284//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3284//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3284//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3284//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3284//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3284//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3284//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3284//console

This message is automatically generated.

> PrefixFilter should seek to first matching row
> --
>
> Key: HBASE-7010
> URL: https://issues.apache.org/jira/browse/HBASE-7010
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Minor
> Fix For: 0.96.0, 0.94.4
>
> Attachments: 7010-experimental.txt, 7010.txt
>
>
> Currently a PrefixFilter will happily scan all KVs < prefix.
> It should seek forward to the prefix if the current KV < prefix.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7132) region_mover.rb should not require FS configurations to be set

2012-11-08 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-7132:
-

Fix Version/s: (was: 0.94.4)
   (was: 0.96.0)

> region_mover.rb should not require FS configurations to be set
> --
>
> Key: HBASE-7132
> URL: https://issues.apache.org/jira/browse/HBASE-7132
> Project: HBase
>  Issue Type: Bug
>  Components: scripts
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
>Priority: Minor
>
> On the 90-branch, you can run region_mover.rb without having FS-related 
> configurations set (i.e. fs.defaultFS, fs.default.name, hbase.rootdir).
> This is not the case against 0.92+.  The reason is that region_mover.rb calls:
> {code}
> r.getTableDesc().getName()
> {code}
> where r is an HRegionInfo.  In 0.92+ this actually reads off the filesystem, 
> which is unnecessary to just get the table name.
> I think copy_table.rb has the same issue, but haven't looked into it enough.
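A hedged sketch of the kind of change being suggested: take the table name from the HRegionInfo itself, which parses it out of the region name, instead of going through getTableDesc(), which reads table metadata off the filesystem in 0.92+. Whether the script would use exactly this accessor is an assumption.

{code}
import org.apache.hadoop.hbase.HRegionInfo;

final class RegionMoverSketch {
  // Hypothetical illustration, not the actual script change: get the table
  // name from the region info (it is embedded in the region name) instead of
  // r.getTableDesc().getName(), which needs hbase.rootdir / FS access.
  static String tableNameOf(HRegionInfo r) {
    return r.getTableNameAsString(); // assumed accessor; no filesystem access
  }
}
{code}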

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6958) TestAssignmentManager sometimes fails

2012-11-08 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-6958:
-

Fix Version/s: (was: 0.94.4)
   0.94.3

Correcting the target to 0.94.3.
Please make sure that when you commit a change the jira is marked for the 
correct branch/point-release; otherwise the generated release-notes will be 
incorrect.

> TestAssignmentManager sometimes fails
> -
>
> Key: HBASE-6958
> URL: https://issues.apache.org/jira/browse/HBASE-6958
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Reporter: Ted Yu
>Assignee: Jimmy Xiang
> Fix For: 0.94.3, 0.96.0
>
> Attachments: 6958_0.94.patch, trunk-6958.patch
>
>
> From 
> https://builds.apache.org/job/HBase-TRUNK/3432/testReport/junit/org.apache.hadoop.hbase.master/TestAssignmentManager/testBalanceOnMasterFailoverScenarioWithOpenedNode/
>  :
> {code}
> Stacktrace
> java.lang.Exception: test timed out after 5000 milliseconds
>   at java.lang.System.arraycopy(Native Method)
>   at java.lang.ThreadGroup.remove(ThreadGroup.java:969)
>   at java.lang.ThreadGroup.threadTerminated(ThreadGroup.java:942)
>   at java.lang.Thread.exit(Thread.java:732)
> ...
> 2012-10-06 00:46:12,521 DEBUG [MASTER_CLOSE_REGION-mockedAMExecutor-0] 
> zookeeper.ZKUtil(1141): mockedServer-0x13a33892de7000e Retrieved 81 byte(s) 
> of data from znode /hbase/unassigned/dc01abf9cd7fd0ea256af4df02811640 and set 
> watcher; region=t,,1349484359011.dc01abf9cd7fd0ea256af4df02811640., 
> state=M_ZK_REGION_OFFLINE, servername=master,1,1, createTime=1349484372509, 
> payload.length=0
> 2012-10-06 00:46:12,522 ERROR [MASTER_CLOSE_REGION-mockedAMExecutor-0] 
> executor.EventHandler(205): Caught throwable while processing event 
> RS_ZK_REGION_CLOSED
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.master.TestAssignmentManager$MockedLoadBalancer.randomAssignment(TestAssignmentManager.java:773)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager.getRegionPlan(AssignmentManager.java:1709)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager.getRegionPlan(AssignmentManager.java:1666)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1435)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1155)
>   at 
> org.apache.hadoop.hbase.master.TestAssignmentManager$AssignmentManagerWithExtrasForTesting.assign(TestAssignmentManager.java:1035)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1130)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1125)
>   at 
> org.apache.hadoop.hbase.master.handler.ClosedRegionHandler.process(ClosedRegionHandler.java:106)
>   at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:202)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
>   at java.lang.Thread.run(Thread.java:722)
> 2012-10-06 00:46:12,522 DEBUG [pool-1-thread-1-EventThread] 
> master.AssignmentManager(670): Handling transition=M_ZK_REGION_OFFLINE, 
> server=master,1,1, region=dc01abf9cd7fd0ea256af4df02811640, current state 
> from region state map ={t,,1349484359011.dc01abf9cd7fd0ea256af4df02811640. 
> state=OFFLINE, ts=1349484372508, server=null}
> {code}
> Looks like the NPE happened on this line:
> {code}
>   this.gate.set(true);
> {code}
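One hedged reading of that NPE: the mocked balancer's gate field is still null when a MASTER_CLOSE_REGION executor thread already calls randomAssignment, i.e. the handler races ahead of the test installing the gate. The sketch below only illustrates that race; apart from the names in the stack trace, the fields and methods are assumptions.

{code}
import java.util.concurrent.atomic.AtomicBoolean;

// Illustrative race only, not the actual MockedLoadBalancer: if an event
// handler invokes randomAssignment() before the test has installed the gate,
// this.gate is still null and gate.set(true) throws the NPE seen above.
class MockedBalancerSketch {
  private AtomicBoolean gate; // null until the test installs it

  void setGateVariable(AtomicBoolean gate) { // assumed setter used by the test
    this.gate = gate;
  }

  Object randomAssignment(Object region, Object servers) {
    this.gate.set(true); // NPE if the handler runs before the gate is set
    return null;
  }
}
{code}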

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7010) PrefixFilter should seek to first matching row

2012-11-08 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-7010:
-

Attachment: 7010-experimental.txt

This patch fixes the issue:
# It short-circuits the MUST_PASS_ONE case just as is done in the 
MUST_PASS_ALL case.
# filter.filterRow() is called *after* filterKeyValue in the normal flow of 
things, so the test now does the same where it matters.

I am quite skeptical about this.

> PrefixFilter should seek to first matching row
> --
>
> Key: HBASE-7010
> URL: https://issues.apache.org/jira/browse/HBASE-7010
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Minor
> Fix For: 0.96.0, 0.94.4
>
> Attachments: 7010-experimental.txt, 7010.txt
>
>
> Currently a PrefixFilter will happily scan all KVs < prefix.
> It should seek forward to the prefix if the current KV < prefix.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7135) Serializing hfileBlcok is incorrect for SlabCache

2012-11-08 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13493761#comment-13493761
 ] 

ramkrishna.s.vasudevan commented on HBASE-7135:
---

@Chunhui
Can you check HBASE-6048?  Is it also due to a similar reason?  Just asking... 
if it is not related, never mind.


> Serializing hfileBlcok is incorrect for SlabCache
> -
>
> Key: HBASE-7135
> URL: https://issues.apache.org/jira/browse/HBASE-7135
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.2
>Reporter: chunhui shen
>Assignee: chunhui shen
> Fix For: 0.96.0
>
> Attachments: HBASE-7135.patch, HBASE-7135v2.patch
>
>
> 2012-11-07 08:35:36,082 ERROR 
> org.apache.hadoop.hbase.io.hfile.slab.SingleSizeCache: Deserializer threw an 
> exception. This may indicate a bug.
> java.io.IOException: Invalid HFile block magic: 
> \x00\x00\x00\x00\x00\x00\x00\x00
> at org.apache.hadoop.hbase.io.hfile.BlockType.parse(BlockType.java:153)
> at org.apache.hadoop.hbase.io.hfile.BlockType.read(BlockType.java:164)
> at org.apache.hadoop.hbase.io.hfile.HFileBlock.&lt;init&gt;(HFileBlock.java:254)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$1.deserialize(HFileBlock.java:148)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$1.deserialize(HFileBlock.java:140)
> at 
> org.apache.hadoop.hbase.io.hfile.slab.SingleSizeCache.getBlock(SingleSizeCache.java:166)
> at 
> org.apache.hadoop.hbase.io.hfile.slab.SlabCache.getBlock(SlabCache.java:245)
> at 
> org.apache.hadoop.hbase.io.hfile.DoubleBlockCache.getBlock(DoubleBlockCache.java:100)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.getBlockFromCache(HFileReaderV2.java:267)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:349)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:257)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:498)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:522)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:226)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:145)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7010) PrefixFilter should seek to first matching row

2012-11-08 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13493760#comment-13493760
 ] 

Lars Hofhansl commented on HBASE-7010:
--

The tricky part is in FilterList.filterRowKey vs. FilterList.filterKeyValue.

Notice that FilterList.filterRowKey short circuits: if one filter passes in 
MUST_PASS_ONE mode, or does not pass in MUST_PASS_ALL mode, that method returns 
immediately without giving the other filters a chance to be evaluated.

FilterList.filterKeyValue, on the other hand, does not short circuit in the 
MUST_PASS_ONE case, but it does in the MUST_PASS_ALL case. W.T.F.?!

So now that PrefixFilter has the filterKeyValue method implemented, the 
behavior is subtly different. (TestFilterList uses a PageFilter and a 
PrefixFilter wrapped in a WhileMatchFilter.)
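A minimal sketch of the asymmetry described above, using local stand-ins rather than the real Filter/FilterList classes; it shows the shape of the short-circuiting, not the actual HBase source.

{code}
import java.util.List;

// Illustrative only: the shape of the short-circuiting described above.
final class FilterListShapeSketch {
  enum Operator { MUST_PASS_ALL, MUST_PASS_ONE }

  interface RowFilter {
    boolean filterRowKey(byte[] row); // true = filter the row out
  }

  static boolean filterRowKey(List<RowFilter> filters, Operator op, byte[] row) {
    for (RowFilter f : filters) {
      boolean filtered = f.filterRowKey(row);
      if (op == Operator.MUST_PASS_ALL && filtered) {
        return true;   // one filter rejects the row -> short circuit
      }
      if (op == Operator.MUST_PASS_ONE && !filtered) {
        return false;  // one filter accepts the row -> short circuit
      }
    }
    return op == Operator.MUST_PASS_ONE; // nobody rejected / nobody accepted
  }

  // filterKeyValue, by contrast, short circuits for MUST_PASS_ALL (a single
  // SKIP/NEXT_* verdict ends the loop) but keeps evaluating every filter in
  // MUST_PASS_ONE mode even after one has returned INCLUDE -- the asymmetry
  // pointed out above.
}
{code}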


> PrefixFilter should seek to first matching row
> --
>
> Key: HBASE-7010
> URL: https://issues.apache.org/jira/browse/HBASE-7010
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Minor
> Fix For: 0.96.0, 0.94.4
>
> Attachments: 7010.txt
>
>
> Currently a PrefixFilter will happily scan all KVs < prefix.
> It should seek forward to the prefix if the current KV < prefix.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7010) PrefixFilter should seek to first matching row

2012-11-08 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13493755#comment-13493755
 ] 

Lars Hofhansl commented on HBASE-7010:
--

The stuff in TestFilterList is pretty tricky. I even believe that the test is 
incorrect.

> PrefixFilter should seek to first matching row
> --
>
> Key: HBASE-7010
> URL: https://issues.apache.org/jira/browse/HBASE-7010
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Minor
> Fix For: 0.96.0, 0.94.4
>
> Attachments: 7010.txt
>
>
> Currently a PrefixFilter will happily scan all KVs < prefix.
> It should seek forward to the prefix if the current KV < prefix.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7135) Serializing hfileBlcok is incorrect for SlabCache

2012-11-08 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13493747#comment-13493747
 ] 

Ted Yu commented on HBASE-7135:
---

+1 on patch.

> Serializing hfileBlcok is incorrect for SlabCache
> -
>
> Key: HBASE-7135
> URL: https://issues.apache.org/jira/browse/HBASE-7135
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.2
>Reporter: chunhui shen
>Assignee: chunhui shen
> Fix For: 0.96.0
>
> Attachments: HBASE-7135.patch, HBASE-7135v2.patch
>
>
> 2012-11-07 08:35:36,082 ERROR 
> org.apache.hadoop.hbase.io.hfile.slab.SingleSizeCache: Deserializer threw an 
> exception. This may indicate a bug.
> java.io.IOException: Invalid HFile block magic: 
> \x00\x00\x00\x00\x00\x00\x00\x00
> at org.apache.hadoop.hbase.io.hfile.BlockType.parse(BlockType.java:153)
> at org.apache.hadoop.hbase.io.hfile.BlockType.read(BlockType.java:164)
> at org.apache.hadoop.hbase.io.hfile.HFileBlock.&lt;init&gt;(HFileBlock.java:254)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$1.deserialize(HFileBlock.java:148)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$1.deserialize(HFileBlock.java:140)
> at 
> org.apache.hadoop.hbase.io.hfile.slab.SingleSizeCache.getBlock(SingleSizeCache.java:166)
> at 
> org.apache.hadoop.hbase.io.hfile.slab.SlabCache.getBlock(SlabCache.java:245)
> at 
> org.apache.hadoop.hbase.io.hfile.DoubleBlockCache.getBlock(DoubleBlockCache.java:100)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.getBlockFromCache(HFileReaderV2.java:267)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:349)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:257)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:498)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:522)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:226)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:145)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7046) Fix resource leak in TestHLogSplit#testOldRecoveredEditsFileSidelined

2012-11-08 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-7046:
--

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

> Fix resource leak in TestHLogSplit#testOldRecoveredEditsFileSidelined
> -
>
> Key: HBASE-7046
> URL: https://issues.apache.org/jira/browse/HBASE-7046
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Affects Versions: 0.96.0
>Reporter: Himanshu Vashishtha
>Assignee: Himanshu Vashishtha
> Fix For: 0.96.0
>
> Attachments: HBASE-7046.patch
>
>
> This method creates a writer but never closes it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7046) Fix resource leak in TestHLogSplit#testOldRecoveredEditsFileSidelined

2012-11-08 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13493743#comment-13493743
 ] 

Ted Yu commented on HBASE-7046:
---

Integrated to trunk.

Thanks for the patch, Himanshu.

Thanks for the review, Ram.

> Fix resource leak in TestHLogSplit#testOldRecoveredEditsFileSidelined
> -
>
> Key: HBASE-7046
> URL: https://issues.apache.org/jira/browse/HBASE-7046
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Affects Versions: 0.96.0
>Reporter: Himanshu Vashishtha
>Assignee: Himanshu Vashishtha
> Fix For: 0.96.0
>
> Attachments: HBASE-7046.patch
>
>
> This method creates a writer but never closes it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7103) Need to fail split if SPLIT znode is deleted even before the split is completed.

2012-11-08 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13493741#comment-13493741
 ] 

ramkrishna.s.vasudevan commented on HBASE-7103:
---

OK, I don't have the code with me right now.  Let me check the code and 
comment on this.  Thanks, Lars and Stack.

> Need to fail split if SPLIT znode is deleted even before the split is 
> completed.
> 
>
> Key: HBASE-7103
> URL: https://issues.apache.org/jira/browse/HBASE-7103
> Project: HBase
>  Issue Type: Bug
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 0.94.3, 0.96.0
>
> Attachments: HBASE-7103_testcase.patch
>
>
> This came up after the following mail on the dev list:
> 'infinite loop of RS_ZK_REGION_SPLIT on .94.2'.
> The following steps lead to the problem:
> -> Initially the parent region P1 starts splitting.
> -> The split proceeds normally.
> -> Another split starts at the same time for the same region P1 (not sure 
> why this started).
> -> Rollback happens on seeing an already existing node.
> -> This node gets deleted in the rollback and the nodeDeleted event fires.
> -> In the nodeDeleted event the RIT for the region P1 gets deleted.
> -> Because of this there is no region in RIT.
> -> Now the first split finishes.  Here the problem is that we try to 
> transition the node from SPLITTING to SPLIT, but the node does not even exist.
> We do not take any action on this; we think it is successful.
> -> Because of this SplitRegionHandler never gets invoked.
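What the summary asks for, roughly, is that the SPLITTING to SPLIT transition be treated as a failure when the znode has already been deleted, instead of being silently assumed successful. A hedged sketch of that check follows, assuming a transition helper that signals failure with a negative version (in the style of ZKAssign.transitionNode); it is not the actual fix.

{code}
import java.io.IOException;

final class SplitTransitionSketch {
  // Assumed helper: returns the new znode version, or -1 if the transition
  // could not be performed (e.g. the znode was deleted by the rolled-back
  // second split described above).
  static int transitionSplittingToSplit(String parentRegion) {
    return -1;
  }

  // Hypothetical sketch of the requested behaviour, not the committed fix:
  // treat a missing SPLIT znode as a failed split rather than a success.
  static void completeSplit(String parentRegion) throws IOException {
    if (transitionSplittingToSplit(parentRegion) < 0) {
      throw new IOException("Failed to transition " + parentRegion
          + " from SPLITTING to SPLIT; znode missing, failing the split");
    }
  }
}
{code}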

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7135) Serializing hfileBlcok is incorrect for SlabCache

2012-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13493742#comment-13493742
 ] 

Hadoop QA commented on HBASE-7135:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12552764/HBASE-7135v2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
87 warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 16 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3283//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3283//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3283//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3283//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3283//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3283//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3283//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3283//console

This message is automatically generated.

> Serializing hfileBlcok is incorrect for SlabCache
> -
>
> Key: HBASE-7135
> URL: https://issues.apache.org/jira/browse/HBASE-7135
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.2
>Reporter: chunhui shen
>Assignee: chunhui shen
> Fix For: 0.96.0
>
> Attachments: HBASE-7135.patch, HBASE-7135v2.patch
>
>
> 2012-11-07 08:35:36,082 ERROR 
> org.apache.hadoop.hbase.io.hfile.slab.SingleSizeCache: Deserializer threw an 
> exception. This may indicate a bug.
> java.io.IOException: Invalid HFile block magic: 
> \x00\x00\x00\x00\x00\x00\x00\x00
> at org.apache.hadoop.hbase.io.hfile.BlockType.parse(BlockType.java:153)
> at org.apache.hadoop.hbase.io.hfile.BlockType.read(BlockType.java:164)
> at org.apache.hadoop.hbase.io.hfile.HFileBlock.&lt;init&gt;(HFileBlock.java:254)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$1.deserialize(HFileBlock.java:148)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$1.deserialize(HFileBlock.java:140)
> at 
> org.apache.hadoop.hbase.io.hfile.slab.SingleSizeCache.getBlock(SingleSizeCache.java:166)
> at 
> org.apache.hadoop.hbase.io.hfile.slab.SlabCache.getBlock(SlabCache.java:245)
> at 
> org.apache.hadoop.hbase.io.hfile.DoubleBlockCache.getBlock(DoubleBlockCache.java:100)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.getBlockFromCache(HFileReaderV2.java:267)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:349)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:257)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:498)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:522)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:226)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:145)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7046) Fix resource leak in TestHLogSplit#testOldRecoveredEditsFileSidelined

2012-11-08 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13493740#comment-13493740
 ] 

ramkrishna.s.vasudevan commented on HBASE-7046:
---

+1

> Fix resource leak in TestHLogSplit#testOldRecoveredEditsFileSidelined
> -
>
> Key: HBASE-7046
> URL: https://issues.apache.org/jira/browse/HBASE-7046
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Affects Versions: 0.96.0
>Reporter: Himanshu Vashishtha
>Assignee: Himanshu Vashishtha
> Fix For: 0.96.0
>
> Attachments: HBASE-7046.patch
>
>
> This method creates a writer but never closes it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7128) Reduce annoying catch clauses of UnsupportedEncodingException that is never thrown because of UTF-8

2012-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13493733#comment-13493733
 ] 

Hadoop QA commented on HBASE-7128:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12552763/HBASE-7128-V2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 21 new 
or modified tests.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
87 warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 16 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3282//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3282//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3282//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3282//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3282//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3282//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3282//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3282//console

This message is automatically generated.

> Reduce annoying catch clauses of UnsupportedEncodingException that is never 
> thrown because of UTF-8
> ---
>
> Key: HBASE-7128
> URL: https://issues.apache.org/jira/browse/HBASE-7128
> Project: HBase
>  Issue Type: Improvement
>Reporter: Hiroshi Ikeda
>Priority: Trivial
> Fix For: 0.96.0
>
> Attachments: HBASE-7128.patch, HBASE-7128-V2.patch
>
>
> There are some codes that catch UnsupportedEncodingException, and log or 
> ignore it because Java always supports UTF-8 (see the javadoc of Charset).
> The catch clauses are annoying, and they should be replaced by methods of 
> Bytes.
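For illustration, the kind of replacement the issue describes; Bytes.toBytes(String) is the existing org.apache.hadoop.hbase.util.Bytes helper, and the before/after below is a sketch of the pattern rather than a specific hunk of the patch.

{code}
import java.io.UnsupportedEncodingException;
import org.apache.hadoop.hbase.util.Bytes;

final class Utf8CatchSketch {
  // Before: a catch clause for an exception that UTF-8 can never trigger.
  static byte[] before(String s) {
    try {
      return s.getBytes("UTF-8");
    } catch (UnsupportedEncodingException e) {
      throw new RuntimeException("UTF-8 not supported?!", e); // dead code path
    }
  }

  // After: the Bytes helper encodes as UTF-8 and declares no checked
  // exception, so the caller loses the useless catch clause.
  static byte[] after(String s) {
    return Bytes.toBytes(s);
  }
}
{code}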

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7130) NULL qualifier is ignored

2012-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13493731#comment-13493731
 ] 

Hadoop QA commented on HBASE-7130:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12552768/trunk-7130_v2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 12 new 
or modified tests.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
87 warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 16 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.client.TestFromClientSideWithCoprocessor
  org.apache.hadoop.hbase.client.TestFromClientSide

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3281//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3281//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3281//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3281//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3281//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3281//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3281//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3281//console

This message is automatically generated.

> NULL qualifier is ignored
> -
>
> Key: HBASE-7130
> URL: https://issues.apache.org/jira/browse/HBASE-7130
> Project: HBase
>  Issue Type: Bug
>  Components: Client, Protobufs
>Affects Versions: 0.96.0
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Fix For: 0.96.0
>
> Attachments: trunk-7130.patch, trunk-7130_v2.patch
>
>
> HBASE-6206 ignored NULL qualifier so the qualifier list could be empty. But 
> the request converter skips empty qualifier list too.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7135) Serializing hfileBlcok is incorrect for SlabCache

2012-11-08 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-7135:
--

Status: Patch Available  (was: Open)

> Serializing hfileBlcok is incorrect for SlabCache
> -
>
> Key: HBASE-7135
> URL: https://issues.apache.org/jira/browse/HBASE-7135
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.2
>Reporter: chunhui shen
>Assignee: chunhui shen
> Fix For: 0.96.0
>
> Attachments: HBASE-7135.patch, HBASE-7135v2.patch
>
>
> 2012-11-07 08:35:36,082 ERROR 
> org.apache.hadoop.hbase.io.hfile.slab.SingleSizeCache: Deserializer threw an 
> exception. This may indicate a bug.
> java.io.IOException: Invalid HFile block magic: 
> \x00\x00\x00\x00\x00\x00\x00\x00
> at org.apache.hadoop.hbase.io.hfile.BlockType.parse(BlockType.java:153)
> at org.apache.hadoop.hbase.io.hfile.BlockType.read(BlockType.java:164)
> at org.apache.hadoop.hbase.io.hfile.HFileBlock.(HFileBlock.java:254)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$1.deserialize(HFileBlock.java:148)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$1.deserialize(HFileBlock.java:140)
> at 
> org.apache.hadoop.hbase.io.hfile.slab.SingleSizeCache.getBlock(SingleSizeCache.java:166)
> at 
> org.apache.hadoop.hbase.io.hfile.slab.SlabCache.getBlock(SlabCache.java:245)
> at 
> org.apache.hadoop.hbase.io.hfile.DoubleBlockCache.getBlock(DoubleBlockCache.java:100)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.getBlockFromCache(HFileReaderV2.java:267)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:349)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:257)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:498)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:522)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:226)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:145)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7130) NULL qualifier is ignored

2012-11-08 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13493728#comment-13493728
 ] 

Ted Yu commented on HBASE-7130:
---

I saw the following test failure for patch v2:
{code}
Failed tests:   
testScan_NullQualifier(org.apache.hadoop.hbase.client.TestFromClientSide): 
expected:<2> but was:<1>
{code}
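For context, testScan_NullQualifier exercises a scan whose column is added with a 
NULL qualifier; a minimal client-side sketch of that shape (hypothetical table and 
values, not the actual test code):
{code}
// The NULL qualifier set on the Scan has to survive the protobuf conversion;
// if the request converter drops the empty qualifier list, the column spec is
// lost on the wire and the scan no longer behaves as the client intended.
Configuration conf = HBaseConfiguration.create();
HTable table = new HTable(conf, "t");
Put put = new Put(Bytes.toBytes("row1"));
put.add(Bytes.toBytes("f"), null, Bytes.toBytes("v-null-qualifier"));
put.add(Bytes.toBytes("f"), Bytes.toBytes("q"), Bytes.toBytes("v-q"));
table.put(put);

Scan scan = new Scan();
scan.addColumn(Bytes.toBytes("f"), null);   // NULL qualifier, not addFamily()
ResultScanner scanner = table.getScanner(scan);
{code}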

> NULL qualifier is ignored
> -
>
> Key: HBASE-7130
> URL: https://issues.apache.org/jira/browse/HBASE-7130
> Project: HBase
>  Issue Type: Bug
>  Components: Client, Protobufs
>Affects Versions: 0.96.0
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Fix For: 0.96.0
>
> Attachments: trunk-7130.patch, trunk-7130_v2.patch
>
>
> HBASE-6206 ignored NULL qualifier so the qualifier list could be empty. But 
> the request converter skips empty qualifier list too.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7130) NULL qualifier is ignored

2012-11-08 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-7130:
---

Status: Patch Available  (was: Open)

Fixed TestAggregateProtocol failure.

> NULL qualifier is ignored
> -
>
> Key: HBASE-7130
> URL: https://issues.apache.org/jira/browse/HBASE-7130
> Project: HBase
>  Issue Type: Bug
>  Components: Client, Protobufs
>Affects Versions: 0.96.0
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Fix For: 0.96.0
>
> Attachments: trunk-7130.patch, trunk-7130_v2.patch
>
>
> HBASE-6206 ignored NULL qualifier so the qualifier list could be empty. But 
> the request converter skips empty qualifier list too.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7128) Reduce annoying catch clauses of UnsupportedEncodingException that is never thrown because of UTF-8

2012-11-08 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-7128:
--

 Summary: Reduce annoying catch clauses of UnsupportedEncodingException 
that is never thrown because of UTF-8  (was: Reduced annoying catch clauses of 
UnsupportedEncodingException that is never thrown because of UTF-8)
Hadoop Flags: Reviewed

Patch v2 looks good.

> Reduce annoying catch clauses of UnsupportedEncodingException that is never 
> thrown because of UTF-8
> ---
>
> Key: HBASE-7128
> URL: https://issues.apache.org/jira/browse/HBASE-7128
> Project: HBase
>  Issue Type: Improvement
>Reporter: Hiroshi Ikeda
>Priority: Trivial
> Fix For: 0.96.0
>
> Attachments: HBASE-7128.patch, HBASE-7128-V2.patch
>
>
> There are some codes that catch UnsupportedEncodingException, and log or 
> ignore it because Java always supports UTF-8 (see the javadoc of Charset).
> The catch clauses are annoying, and they should be replaced by methods of 
> Bytes.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7130) NULL qualifier is ignored

2012-11-08 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-7130:
---

Attachment: trunk-7130_v2.patch

> NULL qualifier is ignored
> -
>
> Key: HBASE-7130
> URL: https://issues.apache.org/jira/browse/HBASE-7130
> Project: HBase
>  Issue Type: Bug
>  Components: Client, Protobufs
>Affects Versions: 0.96.0
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Fix For: 0.96.0
>
> Attachments: trunk-7130.patch, trunk-7130_v2.patch
>
>
> HBASE-6206 ignored NULL qualifier so the qualifier list could be empty. But 
> the request converter skips empty qualifier list too.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7130) NULL qualifier is ignored

2012-11-08 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-7130:
---

Status: Open  (was: Patch Available)

> NULL qualifier is ignored
> -
>
> Key: HBASE-7130
> URL: https://issues.apache.org/jira/browse/HBASE-7130
> Project: HBase
>  Issue Type: Bug
>  Components: Client, Protobufs
>Affects Versions: 0.96.0
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Fix For: 0.96.0
>
> Attachments: trunk-7130.patch
>
>
> HBASE-6206 ignored NULL qualifier so the qualifier list could be empty. But 
> the request converter skips empty qualifier list too.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-5954) Allow proper fsync support for HBase

2012-11-08 Thread liang xie (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13493707#comment-13493707
 ] 

liang xie commented on HBASE-5954:
--

Hi [~lhofhansl], HDFS-3979 has been committed, so maybe we can set a clearer 
target/fix version plan for HBASE-5954 now, right?

> Allow proper fsync support for HBase
> 
>
> Key: HBASE-5954
> URL: https://issues.apache.org/jira/browse/HBASE-5954
> Project: HBase
>  Issue Type: Improvement
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
> Attachments: 5954-trunk-hdfs-trunk.txt, 5954-trunk-hdfs-trunk-v2.txt, 
> 5954-trunk-hdfs-trunk-v3.txt, 5954-trunk-hdfs-trunk-v4.txt, 
> 5954-trunk-hdfs-trunk-v5.txt, 5954-trunk-hdfs-trunk-v6.txt, hbase-hdfs-744.txt
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7135) Serializing hfileBlcok is incorrect for SlabCache

2012-11-08 Thread chunhui shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chunhui shen updated HBASE-7135:


Attachment: HBASE-7135v2.patch

We shouldn't use MINOR_VERSION_NO_CHECKSUM as the default when deserializing.

> Serializing hfileBlcok is incorrect for SlabCache
> -
>
> Key: HBASE-7135
> URL: https://issues.apache.org/jira/browse/HBASE-7135
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.2
>Reporter: chunhui shen
>Assignee: chunhui shen
> Fix For: 0.96.0
>
> Attachments: HBASE-7135.patch, HBASE-7135v2.patch
>
>
> 2012-11-07 08:35:36,082 ERROR 
> org.apache.hadoop.hbase.io.hfile.slab.SingleSizeCache: Deserializer threw an 
> exception. This may indicate a bug.
> java.io.IOException: Invalid HFile block magic: 
> \x00\x00\x00\x00\x00\x00\x00\x00
> at org.apache.hadoop.hbase.io.hfile.BlockType.parse(BlockType.java:153)
> at org.apache.hadoop.hbase.io.hfile.BlockType.read(BlockType.java:164)
> at org.apache.hadoop.hbase.io.hfile.HFileBlock.(HFileBlock.java:254)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$1.deserialize(HFileBlock.java:148)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$1.deserialize(HFileBlock.java:140)
> at 
> org.apache.hadoop.hbase.io.hfile.slab.SingleSizeCache.getBlock(SingleSizeCache.java:166)
> at 
> org.apache.hadoop.hbase.io.hfile.slab.SlabCache.getBlock(SlabCache.java:245)
> at 
> org.apache.hadoop.hbase.io.hfile.DoubleBlockCache.getBlock(DoubleBlockCache.java:100)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.getBlockFromCache(HFileReaderV2.java:267)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:349)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:257)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:498)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:522)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:226)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:145)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7128) Reduced annoying catch clauses of UnsupportedEncodingException that is never thrown because of UTF-8

2012-11-08 Thread Hiroshi Ikeda (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hiroshi Ikeda updated HBASE-7128:
-

Attachment: HBASE-7128-V2.patch

Added a revised patch.

Fixed the code and reversed the dependency of the previous patch (The current 
dependency is: HConstants <- Bytes).

Now UTF8_ENCODING and UTF8_CHARSET only exist in HConstants, and Bytes refers 
to them.

Thanks for the reviews.
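To illustrate the shape of the change (a minimal sketch of the pattern only, not 
the patch itself; the local constant below stands in for whatever HConstants 
exposes):
{code}
// Before: every call site needs a catch clause for an exception that can never
// occur, since every JVM is required to support UTF-8.
byte[] before;
try {
  before = "rowkey".getBytes("UTF-8");
} catch (java.io.UnsupportedEncodingException e) {
  throw new RuntimeException(e);   // dead code
}

// After: encode through a shared Charset constant; no checked exception involved.
java.nio.charset.Charset UTF8_CHARSET = java.nio.charset.Charset.forName("UTF-8");
byte[] after = "rowkey".getBytes(UTF8_CHARSET);
{code}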

> Reduced annoying catch clauses of UnsupportedEncodingException that is never 
> thrown because of UTF-8
> 
>
> Key: HBASE-7128
> URL: https://issues.apache.org/jira/browse/HBASE-7128
> Project: HBase
>  Issue Type: Improvement
>Reporter: Hiroshi Ikeda
>Priority: Trivial
> Fix For: 0.96.0
>
> Attachments: HBASE-7128.patch, HBASE-7128-V2.patch
>
>
> There are some codes that catch UnsupportedEncodingException, and log or 
> ignore it because Java always supports UTF-8 (see the javadoc of Charset).
> The catch clauses are annoying, and they should be replaced by methods of 
> Bytes.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HBASE-7136) SchemaMetrics make SlabCache unavailable

2012-11-08 Thread chunhui shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chunhui shen resolved HBASE-7136.
-

  Resolution: Won't Fix
Release Note: LruBlockCache no longer updates SchemaMetrics in trunk.

> SchemaMetrics make SlabCache unavailable
> 
>
> Key: HBASE-7136
> URL: https://issues.apache.org/jira/browse/HBASE-7136
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.2
>Reporter: chunhui shen
>Assignee: chunhui shen
>
> ERROR org.apache.hadoop.hbase.regionserver.HRegionServer:
> java.lang.IllegalStateException: Schema metrics requested before table/CF 
> name initialization: {"tableName":"null","cfName":"null"}
> at 
> org.apache.hadoop.hbase.regionserver.metrics.SchemaConfigured.getSchemaMetrics(SchemaConfigured.java:182)
> at 
> org.apache.hadoop.hbase.io.hfile.LruBlockCache.updateSizeMetrics(LruBlockCache.java:310)
> at 
> org.apache.hadoop.hbase.io.hfile.LruBlockCache.cacheBlock(LruBlockCache.java:274)
> at 
> org.apache.hadoop.hbase.io.hfile.LruBlockCache.cacheBlock(LruBlockCache.java:293)
> at 
> org.apache.hadoop.hbase.io.hfile.DoubleBlockCache.getBlock(DoubleBlockCache.java:102)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.getBlockFromCache(HFileReaderV2.java:266)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:348)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.readNextDataBlock(HFileReaderV2.java:587)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.next(HFileReaderV2.java:996)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:233)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:145)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.enforceSeek(StoreFileScanner.java:351)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.pollRealKV(KeyValueHeap.java:333)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:291)
> When we put the hfile block into SlabCache, it drops the SchemaMetrics; 
> however, if we then cache this block into LruBlockCache, it throws the above 
> exception.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HBASE-7136) SchemaMetrics make SlabCache unavailable

2012-11-08 Thread chunhui shen (JIRA)
chunhui shen created HBASE-7136:
---

 Summary: SchemaMetrics make SlabCache unavailable
 Key: HBASE-7136
 URL: https://issues.apache.org/jira/browse/HBASE-7136
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.2
Reporter: chunhui shen
Assignee: chunhui shen


ERROR org.apache.hadoop.hbase.regionserver.HRegionServer:
java.lang.IllegalStateException: Schema metrics requested before table/CF name 
initialization: {"tableName":"null","cfName":"null"}
at 
org.apache.hadoop.hbase.regionserver.metrics.SchemaConfigured.getSchemaMetrics(SchemaConfigured.java:182)
at 
org.apache.hadoop.hbase.io.hfile.LruBlockCache.updateSizeMetrics(LruBlockCache.java:310)
at 
org.apache.hadoop.hbase.io.hfile.LruBlockCache.cacheBlock(LruBlockCache.java:274)
at 
org.apache.hadoop.hbase.io.hfile.LruBlockCache.cacheBlock(LruBlockCache.java:293)
at 
org.apache.hadoop.hbase.io.hfile.DoubleBlockCache.getBlock(DoubleBlockCache.java:102)
at 
org.apache.hadoop.hbase.io.hfile.HFileReaderV2.getBlockFromCache(HFileReaderV2.java:266)
at 
org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:348)
at 
org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.readNextDataBlock(HFileReaderV2.java:587)
at 
org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.next(HFileReaderV2.java:996)
at 
org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:233)
at 
org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:145)
at 
org.apache.hadoop.hbase.regionserver.StoreFileScanner.enforceSeek(StoreFileScanner.java:351)
at 
org.apache.hadoop.hbase.regionserver.KeyValueHeap.pollRealKV(KeyValueHeap.java:333)
at 
org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:291)


When we put the hfile block into SlabCache, it drops the SchemaMetrics; 
however, if we then cache this block into LruBlockCache, it throws the above 
exception.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7135) Serializing hfileBlcok is incorrect for SlabCache

2012-11-08 Thread chunhui shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chunhui shen updated HBASE-7135:


Attachment: HBASE-7135.patch

Rewind the duplicated buffer before putting it into the destination.
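A plain java.nio sketch of why the rewind matters (illustrative only, not the 
HBase serializer): a duplicated buffer inherits the source's current position, 
so copying without rewinding can leave the destination's leading bytes 
untouched, which would read back as the all-zero block magic seen in the log 
above.
{code}
import java.nio.ByteBuffer;

public class RewindDemo {
  public static void main(String[] args) {
    ByteBuffer block = ByteBuffer.allocate(16);
    block.putLong(0x1122334455667788L);     // stand-in for the block magic
    ByteBuffer dup = block.duplicate();     // shares content AND position (8)

    ByteBuffer slab = ByteBuffer.allocate(16);
    dup.rewind();                           // without this, the copy starts mid-buffer
    slab.put(dup);

    slab.rewind();
    System.out.println(Long.toHexString(slab.getLong()));   // 1122334455667788
  }
}
{code}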

> Serializing hfileBlcok is incorrect for SlabCache
> -
>
> Key: HBASE-7135
> URL: https://issues.apache.org/jira/browse/HBASE-7135
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.2
>Reporter: chunhui shen
>Assignee: chunhui shen
> Fix For: 0.96.0
>
> Attachments: HBASE-7135.patch
>
>
> 2012-11-07 08:35:36,082 ERROR 
> org.apache.hadoop.hbase.io.hfile.slab.SingleSizeCache: Deserializer threw an 
> exception. This may indicate a bug.
> java.io.IOException: Invalid HFile block magic: 
> \x00\x00\x00\x00\x00\x00\x00\x00
> at org.apache.hadoop.hbase.io.hfile.BlockType.parse(BlockType.java:153)
> at org.apache.hadoop.hbase.io.hfile.BlockType.read(BlockType.java:164)
> at org.apache.hadoop.hbase.io.hfile.HFileBlock.(HFileBlock.java:254)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$1.deserialize(HFileBlock.java:148)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$1.deserialize(HFileBlock.java:140)
> at 
> org.apache.hadoop.hbase.io.hfile.slab.SingleSizeCache.getBlock(SingleSizeCache.java:166)
> at 
> org.apache.hadoop.hbase.io.hfile.slab.SlabCache.getBlock(SlabCache.java:245)
> at 
> org.apache.hadoop.hbase.io.hfile.DoubleBlockCache.getBlock(DoubleBlockCache.java:100)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.getBlockFromCache(HFileReaderV2.java:267)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:349)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:257)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:498)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:522)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:226)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:145)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HBASE-7135) Serializing hfileBlcok is incorrect for SlabCache

2012-11-08 Thread chunhui shen (JIRA)
chunhui shen created HBASE-7135:
---

 Summary: Serializing hfileBlcok is incorrect for SlabCache
 Key: HBASE-7135
 URL: https://issues.apache.org/jira/browse/HBASE-7135
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.2
Reporter: chunhui shen
Assignee: chunhui shen
 Fix For: 0.96.0


2012-11-07 08:35:36,082 ERROR 
org.apache.hadoop.hbase.io.hfile.slab.SingleSizeCache: Deserializer threw an 
exception. This may indicate a bug.
java.io.IOException: Invalid HFile block magic: \x00\x00\x00\x00\x00\x00\x00\x00
at org.apache.hadoop.hbase.io.hfile.BlockType.parse(BlockType.java:153)
at org.apache.hadoop.hbase.io.hfile.BlockType.read(BlockType.java:164)
at org.apache.hadoop.hbase.io.hfile.HFileBlock.(HFileBlock.java:254)
at 
org.apache.hadoop.hbase.io.hfile.HFileBlock$1.deserialize(HFileBlock.java:148)
at 
org.apache.hadoop.hbase.io.hfile.HFileBlock$1.deserialize(HFileBlock.java:140)
at 
org.apache.hadoop.hbase.io.hfile.slab.SingleSizeCache.getBlock(SingleSizeCache.java:166)
at org.apache.hadoop.hbase.io.hfile.slab.SlabCache.getBlock(SlabCache.java:245)
at 
org.apache.hadoop.hbase.io.hfile.DoubleBlockCache.getBlock(DoubleBlockCache.java:100)
at 
org.apache.hadoop.hbase.io.hfile.HFileReaderV2.getBlockFromCache(HFileReaderV2.java:267)
at 
org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:349)
at 
org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:257)
at 
org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:498)
at 
org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:522)
at 
org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:226)
at 
org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:145)


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7134) incrementColumnValue hooks no longer called from anywhere

2012-11-08 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-7134:
--

Attachment: HBASE-7134.patch

Attached patch removes dead code from RegionCoprocessorHost and deprecates the 
affected coprocessor hooks in the API. 

A better option might be to remove the hooks rather than deprecate them; that 
would avoid collecting cruft in the list of hooks, but it would break any 
existing coprocessors that override these hooks via BaseRegionObserver.
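The deprecation route would look roughly like this (sketch; the hook signature 
is abbreviated here and may differ from RegionObserver's actual one):
{code}
// Sketch of deprecating a hook that the region server no longer invokes.
public abstract class ExampleBaseRegionObserver {
  /**
   * @deprecated incrementColumnValue was removed from the region server, so this
   * hook is never called; use the increment hooks instead.
   */
  @Deprecated
  public long preIncrementColumnValue(byte[] row, byte[] family, byte[] qualifier,
      long amount, boolean writeToWAL) {
    return amount;   // pass-through default
  }
}
{code}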

> incrementColumnValue hooks no longer called from anywhere
> -
>
> Key: HBASE-7134
> URL: https://issues.apache.org/jira/browse/HBASE-7134
> Project: HBase
>  Issue Type: Bug
>  Components: Coprocessors
>Affects Versions: 0.96.0
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
> Attachments: HBASE-7134.patch
>
>
> incrementColumnValue has been removed from RegionServer, the corresponding 
> coprocessor hooks for this operation are no longer called.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7134) incrementColumnValue hooks no longer called from anywhere

2012-11-08 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-7134:
--

Description: incrementColumnValue has been removed from RegionServer, the 
corresponding coprocessor hooks for this operation are no longer called.

> incrementColumnValue hooks no longer called from anywhere
> -
>
> Key: HBASE-7134
> URL: https://issues.apache.org/jira/browse/HBASE-7134
> Project: HBase
>  Issue Type: Bug
>  Components: Coprocessors
>Affects Versions: 0.96.0
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>
> incrementColumnValue has been removed from RegionServer, the corresponding 
> coprocessor hooks for this operation are no longer called.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HBASE-7134) incrementColumnValue hooks no longer called from anywhere

2012-11-08 Thread Andrew Purtell (JIRA)
Andrew Purtell created HBASE-7134:
-

 Summary: incrementColumnValue hooks no longer called from anywhere
 Key: HBASE-7134
 URL: https://issues.apache.org/jira/browse/HBASE-7134
 Project: HBase
  Issue Type: Bug
  Components: Coprocessors
Affects Versions: 0.96.0
Reporter: Andrew Purtell
Assignee: Andrew Purtell




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7062) Move HLog stats to metrics 2

2012-11-08 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13493685#comment-13493685
 ] 

Ted Yu commented on HBASE-7062:
---

MetricsHLogSource.java and TestMetricsHLogSource.java are missing the license 
header.
{code}
+public interface MetricsHLogSource extends BaseSource {
{code}
Add an audience annotation; same for MetricsHLogSourceImpl.java.
{code}
+   * Add the time it to to append to a histogram.
+   */
+  void incrementAppendTime(long time);
{code}
I guess what you wanted to say was 'it took to'
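Putting those review points together, the top of the interface would look 
something like this (sketch; the package and the exact InterfaceAudience 
annotation class are assumptions):
{code}
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements. ... (standard ASF license header)
 */
package org.apache.hadoop.hbase.regionserver.wal;           // hypothetical package

import org.apache.hadoop.classification.InterfaceAudience;  // assumed annotation

@InterfaceAudience.Private
public interface MetricsHLogSource extends BaseSource {

  /**
   * Add the time it took to append to a histogram.
   */
  void incrementAppendTime(long time);
}
{code}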


> Move HLog stats to metrics 2
> 
>
> Key: HBASE-7062
> URL: https://issues.apache.org/jira/browse/HBASE-7062
> Project: HBase
>  Issue Type: Sub-task
>  Components: metrics
>Affects Versions: 0.96.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>Priority: Critical
> Fix For: 0.96.0
>
> Attachments: HBASE-7062-1.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7133) svn:ignore on module directories

2012-11-08 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-7133:
-

Attachment: hbase-7133.patch

Trivial patch. 

> svn:ignore on module directories
> 
>
> Key: HBASE-7133
> URL: https://issues.apache.org/jira/browse/HBASE-7133
> Project: HBase
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 0.96.0
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
>Priority: Trivial
> Attachments: hbase-7133.patch
>
>
> This has been bothering me whenever I go back to svn to commit something. We 
> have to set svn:ignore on the module directories (hbase-common, hbase-server, 
> etc.).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HBASE-7133) svn:ignore on module directories

2012-11-08 Thread Enis Soztutar (JIRA)
Enis Soztutar created HBASE-7133:


 Summary: svn:ignore on module directories
 Key: HBASE-7133
 URL: https://issues.apache.org/jira/browse/HBASE-7133
 Project: HBase
  Issue Type: Improvement
  Components: build
Affects Versions: 0.96.0
Reporter: Enis Soztutar
Assignee: Enis Soztutar
Priority: Trivial
 Attachments: hbase-7133.patch

This has been bothering me whenever I go back to svn to commit something. We 
have to set svn:ignore on the module directories (hbase-common, hbase-server, 
etc.).



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HBASE-7132) region_mover.rb should not require FS configurations to be set

2012-11-08 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan resolved HBASE-7132.
---

Resolution: Duplicate

> region_mover.rb should not require FS configurations to be set
> --
>
> Key: HBASE-7132
> URL: https://issues.apache.org/jira/browse/HBASE-7132
> Project: HBase
>  Issue Type: Bug
>  Components: scripts
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
>Priority: Minor
> Fix For: 0.96.0, 0.94.4
>
>
> On the 90-branch, you can run region_mover.rb without having FS-related 
> configurations set (i.e. fs.defaultFS, fs.default.name, hbase.rootdir).
> This is not the case against 0.92+.  The reason is that region_mover.rb calls:
> {code}
> r.getTableDesc().getName()
> {code}
> where r is an HRegionInfo.  In 0.92+ this actually reads off the filesystem, 
> which is unnecessary to just get the table name.
> I think copy_table.rb has the same issue, but haven't looked into it enough.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7132) region_mover.rb should not require FS configurations to be set

2012-11-08 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13493679#comment-13493679
 ] 

Gregory Chanan commented on HBASE-7132:
---

Ah, appears to be a duplicate of HBASE-6927.

> region_mover.rb should not require FS configurations to be set
> --
>
> Key: HBASE-7132
> URL: https://issues.apache.org/jira/browse/HBASE-7132
> Project: HBase
>  Issue Type: Bug
>  Components: scripts
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
>Priority: Minor
> Fix For: 0.96.0, 0.94.4
>
>
> On the 90-branch, you can run region_mover.rb without having FS-related 
> configurations set (i.e. fs.defaultFS, fs.default.name, hbase.rootdir).
> This is not the case against 0.92+.  The reason is that region_mover.rb calls:
> {code}
> r.getTableDesc().getName()
> {code}
> where r is an HRegionInfo.  In 0.92+ this actually reads off the filesystem, 
> which is unnecessary to just get the table name.
> I think copy_table.rb has the same issue, but haven't looked into it enough.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HBASE-7132) region_mover.rb should not require FS configurations to be set

2012-11-08 Thread Gregory Chanan (JIRA)
Gregory Chanan created HBASE-7132:
-

 Summary: region_mover.rb should not require FS configurations to 
be set
 Key: HBASE-7132
 URL: https://issues.apache.org/jira/browse/HBASE-7132
 Project: HBase
  Issue Type: Bug
  Components: scripts
Reporter: Gregory Chanan
Assignee: Gregory Chanan
Priority: Minor
 Fix For: 0.96.0, 0.94.4


On the 90-branch, you can run region_mover.rb without having FS-related 
configurations set (i.e. fs.defaultFS, fs.default.name, hbase.rootdir).

This is not the case against 0.92+.  The reason is that region_mover.rb calls:
{code}
r.getTableDesc().getName()
{code}

where r is an HRegionInfo.  In 0.92+ this actually reads off the filesystem, 
which is unnecessary to just get the table name.

I think copy_table.rb has the same issue, but haven't looked into it enough.
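For reference, the FS-free way to get the table name, assuming HRegionInfo 
exposes getTableName() on this branch (sketch only; the actual fix is tracked 
under HBASE-6927):
{code}
// r is the HRegionInfo the script already holds.
// Current call reads the table descriptor off the filesystem:
//   byte[] name = r.getTableDesc().getName();
// Alternative derived from the region name itself, no FS configuration needed:
byte[] name = r.getTableName();
String tableName = org.apache.hadoop.hbase.util.Bytes.toString(name);
{code}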

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7062) Move HLog stats to metrics 2

2012-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13493677#comment-13493677
 ] 

Hadoop QA commented on HBASE-7062:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12552745/HBASE-7062-1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 9 new 
or modified tests.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
87 warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 16 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3280//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3280//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3280//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3280//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3280//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3280//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3280//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3280//console

This message is automatically generated.

> Move HLog stats to metrics 2
> 
>
> Key: HBASE-7062
> URL: https://issues.apache.org/jira/browse/HBASE-7062
> Project: HBase
>  Issue Type: Sub-task
>  Components: metrics
>Affects Versions: 0.96.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>Priority: Critical
> Fix For: 0.96.0
>
> Attachments: HBASE-7062-1.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7131) Race condition after table is re-enabled: regions are incorrectly reported as being available.

2012-11-08 Thread Aleksandr Shulman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksandr Shulman updated HBASE-7131:
-

Attachment: HBase-7131-v1.patch

Test to verify the fix. Right now it is flaky (which demonstrates the bug).

> Race condition after table is re-enabled: regions are incorrectly reported as 
> being available.
> --
>
> Key: HBASE-7131
> URL: https://issues.apache.org/jira/browse/HBASE-7131
> Project: HBase
>  Issue Type: Bug
>  Components: Region Assignment
>Affects Versions: 0.96.0
>Reporter: Aleksandr Shulman
>Assignee: Jimmy Xiang
> Attachments: HBase-7131-v1.patch
>
>
> For a table that is re-enabled shortly after it is disabled, regions that are 
> reported to be online are not actually available. This manifests as a flush 
> attempt throwing a NotServingRegionException despite all regions from the 
> original table reporting that they are online.
> I have a test in place that verifies this flaky behavior.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HBASE-7131) Race condition after table is re-enabled: regions are incorrectly reported as being available.

2012-11-08 Thread Aleksandr Shulman (JIRA)
Aleksandr Shulman created HBASE-7131:


 Summary: Race condition after table is re-enabled: regions are 
incorrectly reported as being available.
 Key: HBASE-7131
 URL: https://issues.apache.org/jira/browse/HBASE-7131
 Project: HBase
  Issue Type: Bug
  Components: Region Assignment
Affects Versions: 0.96.0
Reporter: Aleksandr Shulman
Assignee: Jimmy Xiang


For a table that is re-enabled shortly after it is disabled, regions that are 
reported to be online are not actually available. This manifests as a flush 
attempt throwing a NotServingRegionException despite all regions from the 
original table reporting that they are online.

I have a test in place that verifies this flaky behavior.
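The race can be provoked with a sequence as simple as this (hypothetical table 
name; not the attached test):
{code}
// enableTable() can return while region reassignment is still settling, so an
// immediate flush may hit a region that is not actually serving yet.
Configuration conf = HBaseConfiguration.create();
HBaseAdmin admin = new HBaseAdmin(conf);
admin.disableTable("t1");
admin.enableTable("t1");
admin.flush("t1");        // intermittently throws NotServingRegionException
{code}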

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7062) Move HLog stats to metrics 2

2012-11-08 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-7062:
-

Attachment: HBASE-7062-1.patch

Re-submitting. Seems like HadoopQA missed this one.

> Move HLog stats to metrics 2
> 
>
> Key: HBASE-7062
> URL: https://issues.apache.org/jira/browse/HBASE-7062
> Project: HBase
>  Issue Type: Sub-task
>  Components: metrics
>Affects Versions: 0.96.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>Priority: Critical
> Fix For: 0.96.0
>
> Attachments: HBASE-7062-1.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7062) Move HLog stats to metrics 2

2012-11-08 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-7062:
-

Attachment: (was: HBASE-7062-0.patch)

> Move HLog stats to metrics 2
> 
>
> Key: HBASE-7062
> URL: https://issues.apache.org/jira/browse/HBASE-7062
> Project: HBase
>  Issue Type: Sub-task
>  Components: metrics
>Affects Versions: 0.96.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>Priority: Critical
> Fix For: 0.96.0
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7122) Proper warning message when opening a log file with no entries (idle cluster)

2012-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13493645#comment-13493645
 ] 

Hadoop QA commented on HBASE-7122:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12552737/HBase-7122.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3279//console

This message is automatically generated.

> Proper warning message when opening a log file with no entries (idle cluster)
> -
>
> Key: HBASE-7122
> URL: https://issues.apache.org/jira/browse/HBASE-7122
> Project: HBase
>  Issue Type: Sub-task
>  Components: Replication
>Affects Versions: 0.94.2
>Reporter: Himanshu Vashishtha
>Assignee: Himanshu Vashishtha
> Fix For: 0.96.0
>
> Attachments: HBase-7122.patch
>
>
> In case the cluster is idle and the log has rolled (offset back to 0), 
> ReplicationSource tries to open the log and gets an EOFException. This gets 
> printed every 10 seconds until an entry is inserted into the log.
> {code}
> 2012-11-07 15:47:40,924 DEBUG regionserver.ReplicationSource 
> (ReplicationSource.java:openReader(487)) - Opening log for replication 
> c0315.hal.cloudera.com%2C40020%2C1352324202860.1352327804874 at 0
> 2012-11-07 15:47:40,926 WARN  regionserver.ReplicationSource 
> (ReplicationSource.java:openReader(543)) - 1 Got: 
> java.io.EOFException
>   at java.io.DataInputStream.readFully(DataInputStream.java:180)
>   at java.io.DataInputStream.readFully(DataInputStream.java:152)
>   at org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1508)
>   at 
> org.apache.hadoop.io.SequenceFile$Reader.(SequenceFile.java:1486)
>   at 
> org.apache.hadoop.io.SequenceFile$Reader.(SequenceFile.java:1475)
>   at 
> org.apache.hadoop.io.SequenceFile$Reader.(SequenceFile.java:1470)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader.(SequenceFileLogReader.java:55)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.init(SequenceFileLogReader.java:175)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLog.getReader(HLog.java:716)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.openReader(ReplicationSource.java:491)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:290)
> 2012-11-07 15:47:40,927 WARN  regionserver.ReplicationSource 
> (ReplicationSource.java:openReader(547)) - Waited too long for this file, 
> considering dumping
> 2012-11-07 15:47:40,927 DEBUG regionserver.ReplicationSource 
> (ReplicationSource.java:sleepForRetries(562)) - Unable to open a reader, 
> sleeping 1000 times 10
> {code}
> We should reduce the log spewing in this case (or some informative message, 
> based on the offset).
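One possible shape for the quieter behavior (hypothetical names; this is not 
the attached patch):
{code}
// Sketch: downgrade the message when the rolled log is known to be empty,
// keep the warning for every other failure. LOG is the class's usual logger.
void reportOpenFailure(IOException ioe, Path currentPath, long position) {
  if (ioe instanceof EOFException && position == 0) {
    LOG.debug("Log " + currentPath + " has no entries yet (idle cluster); will retry");
  } else {
    LOG.warn("Failed to open log " + currentPath + " for replication", ioe);
  }
}
{code}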

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7122) Proper warning message when opening a log file with no entries (idle cluster)

2012-11-08 Thread Himanshu Vashishtha (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Himanshu Vashishtha updated HBASE-7122:
---

Status: Patch Available  (was: Open)

> Proper warning message when opening a log file with no entries (idle cluster)
> -
>
> Key: HBASE-7122
> URL: https://issues.apache.org/jira/browse/HBASE-7122
> Project: HBase
>  Issue Type: Sub-task
>  Components: Replication
>Affects Versions: 0.94.2
>Reporter: Himanshu Vashishtha
>Assignee: Himanshu Vashishtha
> Fix For: 0.96.0
>
> Attachments: HBase-7122.patch
>
>
> In case the cluster is idle and the log has rolled (offset back to 0), 
> ReplicationSource tries to open the log and gets an EOFException. This gets 
> printed every 10 seconds until an entry is inserted into the log.
> {code}
> 2012-11-07 15:47:40,924 DEBUG regionserver.ReplicationSource 
> (ReplicationSource.java:openReader(487)) - Opening log for replication 
> c0315.hal.cloudera.com%2C40020%2C1352324202860.1352327804874 at 0
> 2012-11-07 15:47:40,926 WARN  regionserver.ReplicationSource 
> (ReplicationSource.java:openReader(543)) - 1 Got: 
> java.io.EOFException
>   at java.io.DataInputStream.readFully(DataInputStream.java:180)
>   at java.io.DataInputStream.readFully(DataInputStream.java:152)
>   at org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1508)
>   at 
> org.apache.hadoop.io.SequenceFile$Reader.(SequenceFile.java:1486)
>   at 
> org.apache.hadoop.io.SequenceFile$Reader.(SequenceFile.java:1475)
>   at 
> org.apache.hadoop.io.SequenceFile$Reader.(SequenceFile.java:1470)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader.(SequenceFileLogReader.java:55)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.init(SequenceFileLogReader.java:175)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLog.getReader(HLog.java:716)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.openReader(ReplicationSource.java:491)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:290)
> 2012-11-07 15:47:40,927 WARN  regionserver.ReplicationSource 
> (ReplicationSource.java:openReader(547)) - Waited too long for this file, 
> considering dumping
> 2012-11-07 15:47:40,927 DEBUG regionserver.ReplicationSource 
> (ReplicationSource.java:sleepForRetries(562)) - Unable to open a reader, 
> sleeping 1000 times 10
> {code}
> We should reduce the log spewing in this case (or some informative message, 
> based on the offset).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6466) Enable multi-thread for memstore flush

2012-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13493641#comment-13493641
 ] 

Hadoop QA commented on HBASE-6466:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12552728/HBASE-6466v3.1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
87 warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 16 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3277//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3277//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3277//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3277//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3277//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3277//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3277//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3277//console

This message is automatically generated.

> Enable multi-thread for memstore flush
> --
>
> Key: HBASE-6466
> URL: https://issues.apache.org/jira/browse/HBASE-6466
> Project: HBase
>  Issue Type: Improvement
>Reporter: chunhui shen
>Assignee: chunhui shen
> Attachments: HBASE-6466.patch, HBASE-6466v2.patch, 
> HBASE-6466v3.1.patch, HBASE-6466v3.patch
>
>
> If the KVs are large or the HLog is closed under high write pressure, we found 
> the memstore is often above the high water mark and blocks the puts.
> So should we enable multi-threaded memstore flush?
> Some performance test data for reference,
> 1.test environment : 
> random writting;upper memstore limit 5.6GB;lower memstore limit 4.8GB;400 
> regions per regionserver;row len=50 bytes, value len=1024 bytes;5 
> regionserver, 300 ipc handler per regionserver;5 client, 50 thread handler 
> per client for writing
> 2.test results:
> one cacheFlush handler, tps: 7.8k/s per regionserver, Flush:10.1MB/s per 
> regionserver, appears many aboveGlobalMemstoreLimit blocking
> two cacheFlush handlers, tps: 10.7k/s per regionserver, Flush:12.46MB/s per 
> regionserver,
> 200 thread handler per client & two cacheFlush handlers, tps:16.1k/s per 
> regionserver, Flush:18.6MB/s per regionserver
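Stripped of HBase internals, the idea is several flush handlers draining one 
queue so a single large flush cannot stall the others; a generic, 
self-contained sketch (not the attached patch):
{code}
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class FlushHandlerPool {
  private final BlockingQueue<Runnable> flushQueue = new LinkedBlockingQueue<Runnable>();

  public FlushHandlerPool(int handlerCount) {
    for (int i = 0; i < handlerCount; i++) {
      Thread t = new Thread(new Runnable() {
        public void run() {
          try {
            while (true) {
              flushQueue.take().run();   // each handler flushes one memstore at a time
            }
          } catch (InterruptedException ie) {
            Thread.currentThread().interrupt();
          }
        }
      }, "MemStoreFlusher." + i);
      t.setDaemon(true);
      t.start();
    }
  }

  public void requestFlush(Runnable flushWork) {
    flushQueue.add(flushWork);
  }
}
{code}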

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7062) Move HLog stats to metrics 2

2012-11-08 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13493633#comment-13493633
 ] 

Elliott Clark commented on HBASE-7062:
--

The new metrics will show up like: http://i.imgur.com/Qru4e.png

> Move HLog stats to metrics 2
> 
>
> Key: HBASE-7062
> URL: https://issues.apache.org/jira/browse/HBASE-7062
> Project: HBase
>  Issue Type: Sub-task
>  Components: metrics
>Affects Versions: 0.96.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>Priority: Critical
> Fix For: 0.96.0
>
> Attachments: HBASE-7062-0.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7062) Move HLog stats to metrics 2

2012-11-08 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-7062:
-

Affects Version/s: 0.96.0
   Status: Patch Available  (was: Open)

> Move HLog stats to metrics 2
> 
>
> Key: HBASE-7062
> URL: https://issues.apache.org/jira/browse/HBASE-7062
> Project: HBase
>  Issue Type: Sub-task
>  Components: metrics
>Affects Versions: 0.96.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>Priority: Critical
> Fix For: 0.96.0
>
> Attachments: HBASE-7062-0.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7062) Move HLog stats to metrics 2

2012-11-08 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-7062:
-

Attachment: HBASE-7062-0.patch

Add the first pass at HLog metrics in metrics 2.

> Move HLog stats to metrics 2
> 
>
> Key: HBASE-7062
> URL: https://issues.apache.org/jira/browse/HBASE-7062
> Project: HBase
>  Issue Type: Sub-task
>  Components: metrics
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>Priority: Critical
> Fix For: 0.96.0
>
> Attachments: HBASE-7062-0.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6827) [WINDOWS] TestScannerTimeout fails expecting a timeout

2012-11-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13493628#comment-13493628
 ] 

Hudson commented on HBASE-6827:
---

Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #253 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/253/])
HBASE-6827. [WINDOWS] TestScannerTimeout fails expecting a timeout 
(Revision 1407290)

 Result = FAILURE
enis : 
Files : 
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestScannerTimeout.java


> [WINDOWS] TestScannerTimeout fails expecting a timeout
> --
>
> Key: HBASE-6827
> URL: https://issues.apache.org/jira/browse/HBASE-6827
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.3, 0.96.0
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 0.96.0
>
> Attachments: hbase-6827_v1-0.94.patch, hbase-6827_v1-trunk.patch
>
>
> TestScannerTimeout.test2481() fails with:
> {code}
> java.lang.AssertionError: We should be timing out
>   at org.junit.Assert.fail(Assert.java:93)
>   at 
> org.apache.hadoop.hbase.client.TestScannerTimeout.test2481(TestScannerTimeout.java:117)
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-4913) Per-CF compaction Via the Shell

2012-11-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13493627#comment-13493627
 ] 

Hudson commented on HBASE-4913:
---

Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #253 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/253/])
HBASE-4913 Per-CF compaction Via the Shell (Mubarak and Gregory) (Revision 
1407227)

 Result = FAILURE
gchanan : 
Files : 
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/protobuf/RequestConverter.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/protobuf/generated/AdminProtos.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
* /hbase/trunk/hbase-server/src/main/protobuf/Admin.proto
* /hbase/trunk/hbase-server/src/main/ruby/hbase/admin.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/compact.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/major_compact.rb
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompactionState.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegionServerBulkLoad.java


> Per-CF compaction Via the Shell
> ---
>
> Key: HBASE-4913
> URL: https://issues.apache.org/jira/browse/HBASE-4913
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, regionserver
>Reporter: Nicolas Spiegelberg
>Assignee: Mubarak Seyed
> Fix For: 0.96.0, 0.94.4
>
> Attachments: HBASE-4913-94.patch, HBASE-4913-addendum.patch, 
> HBASE-4913.trunk.v1.patch, HBASE-4913.trunk.v2.patch, 
> HBASE-4913.trunk.v2.patch, HBASE-4913-trunk-v3.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6826) [WINDOWS] TestFromClientSide failures

2012-11-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13493629#comment-13493629
 ] 

Hudson commented on HBASE-6826:
---

Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #253 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/253/])
HBASE-6826. [WINDOWS] TestFromClientSide failures (Revision 1407285)

 Result = FAILURE
enis : 
Files : 
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java


> [WINDOWS] TestFromClientSide failures
> -
>
> Key: HBASE-6826
> URL: https://issues.apache.org/jira/browse/HBASE-6826
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.3, 0.96.0
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
>  Labels: windows
> Fix For: 0.96.0
>
> Attachments: hbase-6826_v1-0.94.patch, hbase-6826_v1-trunk.patch, 
> hbase-6826_v2-0.94.patch, hbase-6826_v2-trunk.patch
>
>
> The following tests fail for TestFromClientSide: 
> {code}
> testPoolBehavior()
> testClientPoolRoundRobin()
> testClientPoolThreadLocal()
> {code}
> The first test fails because the test (wrongly) assumes that 
> ThreadPoolExecutor can reclaim the thread immediately. 
> The second and third tests seem to fail because the Puts to the table do 
> not specify an explicit timestamp, but on Windows, consecutive calls to put 
> happen to finish in the same millisecond, so the resulting mutations have 
> the same timestamp and thus there is only one version of the cell value.  
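> A minimal illustrative sketch (not the attached patch; ROW, FAMILY, QUALIFIER 
> and numVersions are assumed test constants) of how giving each Put an explicit, 
> distinct timestamp avoids depending on millisecond clock resolution:
> {code}
> long ts = System.currentTimeMillis();
> for (int i = 0; i < numVersions; i++) {
>   Put put = new Put(ROW);
>   // explicit timestamp per version, so same-millisecond puts still differ
>   put.add(FAMILY, QUALIFIER, ts + i, Bytes.toBytes("value-" + i));
>   table.put(put);
> }
> {code}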

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6831) [WINDOWS] HBaseTestingUtility.expireSession() does not expire zookeeper session

2012-11-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13493625#comment-13493625
 ] 

Hudson commented on HBASE-6831:
---

Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #253 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/253/])
HBASE-6831. [WINDOWS] HBaseTestingUtility.expireSession() does not expire 
zookeeper session (Revision 1407300)

 Result = FAILURE
enis : 
Files : 
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java


> [WINDOWS] HBaseTestingUtility.expireSession() does not expire zookeeper 
> session
> ---
>
> Key: HBASE-6831
> URL: https://issues.apache.org/jira/browse/HBASE-6831
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.3, 0.96.0
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
>  Labels: windows
> Fix For: 0.96.0
>
> Attachments: hbase-6831_v1-0.94.patch, hbase-6831_v1-trunk.patch
>
>
> TestReplicationPeer fails because it forces the zookeeper session expiration 
> by calling HBaseTestingUtility.expireSession(), but that function fails to do 
> so.
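> For reference, one common way to force a zookeeper session to expire (a sketch 
> under assumed names quorum, timeout and victim, not necessarily what the patch 
> does) is to open a second connection with the victim's session id and password 
> and then close it:
> {code}
> ZooKeeper zk = new ZooKeeper(quorum, timeout,
>     new Watcher() { public void process(WatchedEvent event) {} },
>     victim.getSessionId(), victim.getSessionPasswd());
> // closing the duplicate connection invalidates the original session
> zk.close();
> {code}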

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6828) [WINDOWS] TestMemoryBoundedLogMessageBuffer failures

2012-11-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13493626#comment-13493626
 ] 

Hudson commented on HBASE-6828:
---

Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #253 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/253/])
HBASE-6828. [WINDOWS] TestMemoryBoundedLogMessageBuffer failures (Revision 
1407298)

 Result = FAILURE
enis : 
Files : 
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/monitoring/TestMemoryBoundedLogMessageBuffer.java


> [WINDOWS] TestMemoryBoundedLogMessageBuffer failures
> 
>
> Key: HBASE-6828
> URL: https://issues.apache.org/jira/browse/HBASE-6828
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.3, 0.96.0
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
>  Labels: windows
> Fix For: 0.96.0
>
> Attachments: hbase-6828_v1-0.94.patch, hbase-6828_v1-trunk.patch
>
>
> TestMemoryBoundedLogMessageBuffer fails because of a suspected \n line ending 
> difference.
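> A minimal sketch of a line-ending-agnostic assertion (the dump variable and 
> message text are assumed here; this is not the attached patch):
> {code}
> // compare against the platform separator instead of hard-coding "\n"
> String eol = System.getProperty("line.separator");
> assertTrue(dump.contains("test message" + eol));
> {code}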

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6820) [WINDOWS] MiniZookeeperCluster should ensure that ZKDatabase is closed upon shutdown()

2012-11-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13493621#comment-13493621
 ] 

Hudson commented on HBASE-6820:
---

Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #253 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/253/])
HBASE-6820. [WINDOWS] MiniZookeeperCluster should ensure that ZKDatabase is 
closed upon shutdown() (Revision 1407287)

 Result = FAILURE
enis : 
Files : 
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/zookeeper/MiniZooKeeperCluster.java


> [WINDOWS] MiniZookeeperCluster should ensure that ZKDatabase is closed upon 
> shutdown()
> --
>
> Key: HBASE-6820
> URL: https://issues.apache.org/jira/browse/HBASE-6820
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.3, 0.96.0
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
>  Labels: windows
> Fix For: 0.96.0
>
> Attachments: hbase-6820_v1-0.94.patch, hbase-6820_v1-trunk.patch
>
>
> MiniZookeeperCluster.shutdown() shuts down the ZookeeperServer and 
> NIOServerCnxnFactory. However, MiniZookeeperCluster uses a deprecated 
> ZookeeperServer constructor, which in turn constructs its own FileTxnSnapLog, 
> and ZKDatabase. Since ZookeeperServer.shutdown() does not close() the 
> ZKDatabase, we have to explicitly close it in MiniZookeeperCluster.shutdown().
> Tests affected by this are
> {code}
> TestSplitLogManager
> TestSplitLogWorker
> TestOfflineMetaRebuildBase
> TestOfflineMetaRebuildHole
> TestOfflineMetaRebuildOverlap
> {code}
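> A simplified sketch of the explicit cleanup described above (not necessarily 
> the exact patch):
> {code}
> zooKeeperServer.shutdown();
> // ZookeeperServer.shutdown() does not close the ZKDatabase, so do it here
> ZKDatabase zkDb = zooKeeperServer.getZKDatabase();
> if (zkDb != null) {
>   try {
>     zkDb.close();
>   } catch (IOException e) {
>     // best effort during shutdown
>   }
> }
> {code}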

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6822) [WINDOWS] MiniZookeeperCluster multiple daemons bind to the same port

2012-11-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13493623#comment-13493623
 ] 

Hudson commented on HBASE-6822:
---

Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #253 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/253/])
HBASE-6822. [WINDOWS] MiniZookeeperCluster multiple daemons bind to the 
same port (Revision 1407286)

 Result = FAILURE
enis : 
Files : 
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/zookeeper/MiniZooKeeperCluster.java


> [WINDOWS] MiniZookeeperCluster multiple daemons bind to the same port
> -
>
> Key: HBASE-6822
> URL: https://issues.apache.org/jira/browse/HBASE-6822
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.3, 0.96.0
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 0.96.0
>
> Attachments: hbase-6822_v1-0.94.patch, hbase-6822_v1-trunk.patch
>
>
> TestHBaseTestingUtility.testMiniZooKeeper() tests whether the mini zk cluster 
> is working by launching 5 threads corresponding to zk servers. 
> NIOServerCnxnFactory.configure() configures the socket as:
> {code}
> this.ss = ServerSocketChannel.open();
> ss.socket().setReuseAddress(true);
> {code}
> setReuseAddress() is set, because it allows the server to come back up and 
> bind to the same port before the socket is timed-out by the kernel.
> Under Windows, the behavior of ServerSocket.setReuseAddress() is different 
> from Linux: it allows any process to bind to an already-bound 
> port. This lets ZK servers starting on the same node bind to 
> the same port. 
> The following part of the patch at 
> https://issues.apache.org/jira/browse/HADOOP-8223 deals with this case for 
> Hadoop:
> {code}
> if(Shell.WINDOWS) {
> +  // result of setting the SO_REUSEADDR flag is different on Windows
> +  // http://msdn.microsoft.com/en-us/library/ms740621(v=vs.85).aspx
> +  // without this 2 NN's can start on the same machine and listen on 
> +  // the same port with indeterminate routing of incoming requests to 
> them
> +  ret.setReuseAddress(false);
> +}
> {code}
> We should do the same in Zookeeper (I'll open a ZOOK issue). But in the 
> meantime, we can fix hbase tests so they do not rely on BindException to detect 
> bind errors. In particular, in MiniZKCluster.startup(), when starting more than 
> one server, we already know that we have to increment the port number. 
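> A sketch of that idea (selectClientPort() and startServer() are assumed helper 
> names, not the actual patch):
> {code}
> // assign each server its own, already-incremented client port up front
> // instead of waiting for a BindException to signal a port collision
> int basePort = selectClientPort();
> for (int i = 0; i < numServers; i++) {
>   startServer(i, basePort + i);
> }
> {code}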

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6823) [WINDOWS] TestSplitTransaction fails due to the Log handle not released by a call to DaughterOpener.start()

2012-11-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13493622#comment-13493622
 ] 

Hudson commented on HBASE-6823:
---

Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #253 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/253/])
HBASE-6823. [WINDOWS] TestSplitTransaction fails due to the Log handle not 
released by a call to DaughterOpener.start() (Revision 1407289)

 Result = FAILURE
enis : 
Files : 
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestSplitTransaction.java


> [WINDOWS] TestSplitTransaction fails due to the Log handle not released by a 
> call to DaughterOpener.start()
> ---
>
> Key: HBASE-6823
> URL: https://issues.apache.org/jira/browse/HBASE-6823
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.3, 0.96.0
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
>  Labels: windows
> Fix For: 0.96.0
>
> Attachments: hbase-6823_v1-0.94.patch, hbase-6823_v1-trunk.patch, 
> hbase-6823_v2-0.94.patch, hbase-6823_v2-trunk.patch
>
>
> Two unit test cases in the HBase RegionServer tests fail in the clean-up 
> stage because they cannot delete the files/folders created during the test. 
> testWholesomeSplit(org.apache.hadoop.hbase.regionserver.TestSplitTransaction):
>  Failed delete of ./target/test-
> data/1c386abc-f159-492e-b21f-e89fab24d85b/org.apache.hadoop.hbase.regionserver.TestSplitTransaction/table/a588d813fd26280c2b42e93565ed960c
> testRollback(org.apache.hadoop.hbase.regionserver.TestSplitTransaction): 
> Failed delete of ./target/test-data/6
> 1a1a14b-0cc9-4dd6-93fd-4dc021e2bfcc/org.apache.hadoop.hbase.regionserver.TestSplitTransaction/table/8090abc89528461fa284288c257662cd
> The root cause is triggered by a call to DaughterOpener.start() in 
> \src\hbase\src\main\java\org\apache\hadoop\hbase\regionserver\SplitTransaction.java
>  (openDaughters() function). It leaves handles open on the split folders/files, 
> which causes deletion of those files/folders to fail on Windows.
> Windows does not allow deleting a file while there are open handles to it.
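> A sketch of the kind of cleanup implied above (not necessarily the committed 
> fix; daughters is an assumed variable holding the opened daughter regions):
> {code}
> // close the daughter regions opened by DaughterOpener so no file handles
> // remain open when the test tries to delete its directories on Windows
> for (HRegion daughter : daughters) {
>   daughter.close();
> }
> {code}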

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7121) Fix TestHFileOutputFormat after moving RS to metrics2

2012-11-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13493624#comment-13493624
 ] 

Hudson commented on HBASE-7121:
---

Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #253 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/253/])
HBASE-7121 Fix TestHFileOutputFormat after moving RS to metrics2 (Revision 
1407216)

 Result = FAILURE
eclark : 
Files : 
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerWrapperImpl.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionWrapperImpl.java


> Fix TestHFileOutputFormat after moving RS to metrics2
> -
>
> Key: HBASE-7121
> URL: https://issues.apache.org/jira/browse/HBASE-7121
> Project: HBase
>  Issue Type: Sub-task
>  Components: metrics
>Affects Versions: 0.96.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Fix For: 0.96.0
>
> Attachments: HBASE-7121-0.patch
>
>
> When spinning up lots of threads in a single jvm it's possible that the 
> metrics wrapper can touch variables that are not initialized.
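> An illustrative guard for the kind of getter affected (only a sketch, not the 
> committed change; regionServer and numStores are assumed fields):
> {code}
> public long getNumStores() {
>   // return a default instead of dereferencing state that another thread
>   // has not finished initializing yet
>   if (regionServer == null) {
>     return 0;
>   }
>   return numStores;
> }
> {code}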

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7110) refactor the compaction selection and config code similarly to 0.89-fb changes

2012-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13493617#comment-13493617
 ] 

Hadoop QA commented on HBASE-7110:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12552718/HBASE-7110-v6-squashed.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 11 new 
or modified tests.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
85 warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 17 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.io.TestHeapSize

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3276//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3276//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3276//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3276//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3276//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3276//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3276//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3276//console

This message is automatically generated.

> refactor the compaction selection and config code similarly to 0.89-fb changes
> --
>
> Key: HBASE-7110
> URL: https://issues.apache.org/jira/browse/HBASE-7110
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 0.96.0
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HBASE-6371-v5-refactor-only-squashed.patch, 
> HBASE-7110-v6-squashed.patch
>
>
> Separate JIRA for refactoring changes from HBASE-7055 (and further ones after 
> code review)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-4583) Integrate RWCC with Append and Increment operations

2012-11-08 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-4583:
--

Attachment: (was: 4583-mixed-v3.txt)

> Integrate RWCC with Append and Increment operations
> ---
>
> Key: HBASE-4583
> URL: https://issues.apache.org/jira/browse/HBASE-4583
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
> Fix For: 0.96.0
>
> Attachments: 4583-mixed.txt, 4583-mixed-v2.txt, 4583-mixed-v4.txt, 
> 4583-trunk-less-radical.txt, 4583-trunk-less-radical-v2.txt, 
> 4583-trunk-less-radical-v3.txt, 4583-trunk-less-radical-v4.txt, 
> 4583-trunk-less-radical-v5.txt, 4583-trunk-less-radical-v6.txt, 
> 4583-trunk-radical.txt, 4583-trunk-radical_v2.txt, 4583-trunk-v3.txt, 
> 4583.txt, 4583-v2.txt, 4583-v3.txt, 4583-v4.txt
>
>
> Currently Increment and Append operations do not work with RWCC and hence a 
> client could see the results of multiple such operation mixed in the same 
> Get/Scan.
> The semantics might be a bit more interesting here as upsert adds and removes 
> to and from the memstore.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6826) [WINDOWS] TestFromClientSide failures

2012-11-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13493610#comment-13493610
 ] 

Hudson commented on HBASE-6826:
---

Integrated in HBase-TRUNK #3522 (See 
[https://builds.apache.org/job/HBase-TRUNK/3522/])
HBASE-6826. [WINDOWS] TestFromClientSide failures (Revision 1407285)

 Result = FAILURE
enis : 
Files : 
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java


> [WINDOWS] TestFromClientSide failures
> -
>
> Key: HBASE-6826
> URL: https://issues.apache.org/jira/browse/HBASE-6826
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.3, 0.96.0
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
>  Labels: windows
> Fix For: 0.96.0
>
> Attachments: hbase-6826_v1-0.94.patch, hbase-6826_v1-trunk.patch, 
> hbase-6826_v2-0.94.patch, hbase-6826_v2-trunk.patch
>
>
> The following tests fail for TestFromClientSide: 
> {code}
> testPoolBehavior()
> testClientPoolRoundRobin()
> testClientPoolThreadLocal()
> {code}
> The first test fails because the test (wrongly) assumes that 
> ThreadPoolExecutor can reclaim the thread immediately. 
> The second and third tests seem to fail because the Puts to the table do 
> not specify an explicit timestamp, but on Windows, consecutive calls to put 
> happen to finish in the same millisecond, so the resulting mutations have 
> the same timestamp and thus there is only one version of the cell value.  

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6828) [WINDOWS] TestMemoryBoundedLogMessageBuffer failures

2012-11-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13493608#comment-13493608
 ] 

Hudson commented on HBASE-6828:
---

Integrated in HBase-TRUNK #3522 (See 
[https://builds.apache.org/job/HBase-TRUNK/3522/])
HBASE-6828. [WINDOWS] TestMemoryBoundedLogMessageBuffer failures (Revision 
1407298)

 Result = FAILURE
enis : 
Files : 
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/monitoring/TestMemoryBoundedLogMessageBuffer.java


> [WINDOWS] TestMemoryBoundedLogMessageBuffer failures
> 
>
> Key: HBASE-6828
> URL: https://issues.apache.org/jira/browse/HBASE-6828
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.3, 0.96.0
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
>  Labels: windows
> Fix For: 0.96.0
>
> Attachments: hbase-6828_v1-0.94.patch, hbase-6828_v1-trunk.patch
>
>
> TestMemoryBoundedLogMessageBuffer fails because of a suspected \n line ending 
> difference.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6827) [WINDOWS] TestScannerTimeout fails expecting a timeout

2012-11-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13493609#comment-13493609
 ] 

Hudson commented on HBASE-6827:
---

Integrated in HBase-TRUNK #3522 (See 
[https://builds.apache.org/job/HBase-TRUNK/3522/])
HBASE-6827. [WINDOWS] TestScannerTimeout fails expecting a timeout 
(Revision 1407290)

 Result = FAILURE
enis : 
Files : 
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestScannerTimeout.java


> [WINDOWS] TestScannerTimeout fails expecting a timeout
> --
>
> Key: HBASE-6827
> URL: https://issues.apache.org/jira/browse/HBASE-6827
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.3, 0.96.0
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 0.96.0
>
> Attachments: hbase-6827_v1-0.94.patch, hbase-6827_v1-trunk.patch
>
>
> TestScannerTimeout.test2481() fails with:
> {code}
> java.lang.AssertionError: We should be timing out
>   at org.junit.Assert.fail(Assert.java:93)
>   at 
> org.apache.hadoop.hbase.client.TestScannerTimeout.test2481(TestScannerTimeout.java:117)
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6831) [WINDOWS] HBaseTestingUtility.expireSession() does not expire zookeeper session

2012-11-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13493607#comment-13493607
 ] 

Hudson commented on HBASE-6831:
---

Integrated in HBase-TRUNK #3522 (See 
[https://builds.apache.org/job/HBase-TRUNK/3522/])
HBASE-6831. [WINDOWS] HBaseTestingUtility.expireSession() does not expire 
zookeeper session (Revision 1407300)

 Result = FAILURE
enis : 
Files : 
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java


> [WINDOWS] HBaseTestingUtility.expireSession() does not expire zookeeper 
> session
> ---
>
> Key: HBASE-6831
> URL: https://issues.apache.org/jira/browse/HBASE-6831
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.3, 0.96.0
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
>  Labels: windows
> Fix For: 0.96.0
>
> Attachments: hbase-6831_v1-0.94.patch, hbase-6831_v1-trunk.patch
>
>
> TestReplicationPeer fails because it forces the zookeeper session expiration 
> by calling HBaseTestingUtility.expireSession(), but that function fails to do 
> so.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6823) [WINDOWS] TestSplitTransaction fails due to the Log handle not released by a call to DaughterOpener.start()

2012-11-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13493605#comment-13493605
 ] 

Hudson commented on HBASE-6823:
---

Integrated in HBase-TRUNK #3522 (See 
[https://builds.apache.org/job/HBase-TRUNK/3522/])
HBASE-6823. [WINDOWS] TestSplitTransaction fails due to the Log handle not 
released by a call to DaughterOpener.start() (Revision 1407289)

 Result = FAILURE
enis : 
Files : 
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestSplitTransaction.java


> [WINDOWS] TestSplitTransaction fails due to the Log handle not released by a 
> call to DaughterOpener.start()
> ---
>
> Key: HBASE-6823
> URL: https://issues.apache.org/jira/browse/HBASE-6823
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.3, 0.96.0
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
>  Labels: windows
> Fix For: 0.96.0
>
> Attachments: hbase-6823_v1-0.94.patch, hbase-6823_v1-trunk.patch, 
> hbase-6823_v2-0.94.patch, hbase-6823_v2-trunk.patch
>
>
> Two unit test cases in the HBase RegionServer tests fail in the clean-up 
> stage because they cannot delete the files/folders created during the test. 
> testWholesomeSplit(org.apache.hadoop.hbase.regionserver.TestSplitTransaction):
>  Failed delete of ./target/test-
> data/1c386abc-f159-492e-b21f-e89fab24d85b/org.apache.hadoop.hbase.regionserver.TestSplitTransaction/table/a588d813fd26280c2b42e93565ed960c
> testRollback(org.apache.hadoop.hbase.regionserver.TestSplitTransaction): 
> Failed delete of ./target/test-data/6
> 1a1a14b-0cc9-4dd6-93fd-4dc021e2bfcc/org.apache.hadoop.hbase.regionserver.TestSplitTransaction/table/8090abc89528461fa284288c257662cd
> The root cause is triggered by a call to DaughterOpener.start() in 
> \src\hbase\src\main\java\org\apache\hadoop\hbase\regionserver\SplitTransaction.java
>  (openDaughters() function). It leaves handles open on the split folders/files, 
> which causes deletion of those files/folders to fail on Windows.
> Windows does not allow deleting a file while there are open handles to it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6822) [WINDOWS] MiniZookeeperCluster multiple daemons bind to the same port

2012-11-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13493606#comment-13493606
 ] 

Hudson commented on HBASE-6822:
---

Integrated in HBase-TRUNK #3522 (See 
[https://builds.apache.org/job/HBase-TRUNK/3522/])
HBASE-6822. [WINDOWS] MiniZookeeperCluster multiple daemons bind to the 
same port (Revision 1407286)

 Result = FAILURE
enis : 
Files : 
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/zookeeper/MiniZooKeeperCluster.java


> [WINDOWS] MiniZookeeperCluster multiple daemons bind to the same port
> -
>
> Key: HBASE-6822
> URL: https://issues.apache.org/jira/browse/HBASE-6822
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.3, 0.96.0
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 0.96.0
>
> Attachments: hbase-6822_v1-0.94.patch, hbase-6822_v1-trunk.patch
>
>
> TestHBaseTestingUtility.testMiniZooKeeper() tests whether the mini zk cluster 
> is working by launching 5 threads corresponding to zk servers. 
> NIOServerCnxnFactory.configure() configures the socket as:
> {code}
> this.ss = ServerSocketChannel.open();
> ss.socket().setReuseAddress(true);
> {code}
> setReuseAddress() is set, because it allows the server to come back up and 
> bind to the same port before the socket is timed-out by the kernel.
> Under Windows, the behavior of ServerSocket.setReuseAddress() is different 
> from Linux: it allows any process to bind to an already-bound 
> port. This lets ZK servers starting on the same node bind to 
> the same port. 
> The following part of the patch at 
> https://issues.apache.org/jira/browse/HADOOP-8223 deals with this case for 
> Hadoop:
> {code}
> if(Shell.WINDOWS) {
> +  // result of setting the SO_REUSEADDR flag is different on Windows
> +  // http://msdn.microsoft.com/en-us/library/ms740621(v=vs.85).aspx
> +  // without this 2 NN's can start on the same machine and listen on 
> +  // the same port with indeterminate routing of incoming requests to 
> them
> +  ret.setReuseAddress(false);
> +}
> {code}
> We should do the same in Zookeeper (I'll open a ZOOK issue). But in the 
> meantime, we can fix hbase tests so they do not rely on BindException to detect 
> bind errors. In particular, in MiniZKCluster.startup(), when starting more than 
> one server, we already know that we have to increment the port number. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6820) [WINDOWS] MiniZookeeperCluster should ensure that ZKDatabase is closed upon shutdown()

2012-11-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13493604#comment-13493604
 ] 

Hudson commented on HBASE-6820:
---

Integrated in HBase-TRUNK #3522 (See 
[https://builds.apache.org/job/HBase-TRUNK/3522/])
HBASE-6820. [WINDOWS] MiniZookeeperCluster should ensure that ZKDatabase is 
closed upon shutdown() (Revision 1407287)

 Result = FAILURE
enis : 
Files : 
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/zookeeper/MiniZooKeeperCluster.java


> [WINDOWS] MiniZookeeperCluster should ensure that ZKDatabase is closed upon 
> shutdown()
> --
>
> Key: HBASE-6820
> URL: https://issues.apache.org/jira/browse/HBASE-6820
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.3, 0.96.0
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
>  Labels: windows
> Fix For: 0.96.0
>
> Attachments: hbase-6820_v1-0.94.patch, hbase-6820_v1-trunk.patch
>
>
> MiniZookeeperCluster.shutdown() shuts down the ZookeeperServer and 
> NIOServerCnxnFactory. However, MiniZookeeperCluster uses a deprecated 
> ZookeeperServer constructor, which in turn constructs its own FileTxnSnapLog, 
> and ZKDatabase. Since ZookeeperServer.shutdown() does not close() the 
> ZKDatabase, we have to explicitly close it in MiniZookeeperCluster.shutdown().
> Tests affected by this are
> {code}
> TestSplitLogManager
> TestSplitLogWorker
> TestOfflineMetaRebuildBase
> TestOfflineMetaRebuildHole
> TestOfflineMetaRebuildOverlap
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7122) Proper warning message when opening a log file with no entries (idle cluster)

2012-11-08 Thread Himanshu Vashishtha (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Himanshu Vashishtha updated HBASE-7122:
---

Attachment: HBase-7122.patch

Tested it on a cluster; it stops emitting the exception and other behavior remains 
the same.

> Proper warning message when opening a log file with no entries (idle cluster)
> -
>
> Key: HBASE-7122
> URL: https://issues.apache.org/jira/browse/HBASE-7122
> Project: HBase
>  Issue Type: Sub-task
>  Components: Replication
>Affects Versions: 0.94.2
>Reporter: Himanshu Vashishtha
>Assignee: Himanshu Vashishtha
> Fix For: 0.96.0
>
> Attachments: HBase-7122.patch
>
>
> When the cluster is idle and the log has rolled (offset back to 0), 
> ReplicationSource tries to open the log and gets an EOF exception. This gets 
> printed every 10 seconds until an entry is inserted into the log.
> {code}
> 2012-11-07 15:47:40,924 DEBUG regionserver.ReplicationSource 
> (ReplicationSource.java:openReader(487)) - Opening log for replication 
> c0315.hal.cloudera.com%2C40020%2C1352324202860.1352327804874 at 0
> 2012-11-07 15:47:40,926 WARN  regionserver.ReplicationSource 
> (ReplicationSource.java:openReader(543)) - 1 Got: 
> java.io.EOFException
>   at java.io.DataInputStream.readFully(DataInputStream.java:180)
>   at java.io.DataInputStream.readFully(DataInputStream.java:152)
>   at org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1508)
>   at 
> org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1486)
>   at 
> org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1475)
>   at 
> org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1470)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader.<init>(SequenceFileLogReader.java:55)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.init(SequenceFileLogReader.java:175)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLog.getReader(HLog.java:716)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.openReader(ReplicationSource.java:491)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:290)
> 2012-11-07 15:47:40,927 WARN  regionserver.ReplicationSource 
> (ReplicationSource.java:openReader(547)) - Waited too long for this file, 
> considering dumping
> 2012-11-07 15:47:40,927 DEBUG regionserver.ReplicationSource 
> (ReplicationSource.java:sleepForRetries(562)) - Unable to open a reader, 
> sleeping 1000 times 10
> {code}
> We should reduce the log spewing in this case (or print a more informative 
> message, based on the offset).
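> A sketch of that idea (variable names are assumed; this is not the attached 
> patch): treat the empty, freshly rolled log as the expected idle-cluster case 
> and log it quietly.
> {code}
> if (position == 0 && fileLength == 0) {
>   // idle cluster: the rolled log has no entries yet, nothing to replicate
>   LOG.debug("Log " + path + " is empty, will retry");
> } else {
>   LOG.warn("Failed to open log for replication", e);
> }
> {code}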

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6466) Enable multi-thread for memstore flush

2012-11-08 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HBASE-6466:


Attachment: HBASE-6466v3.1.patch

> Enable multi-thread for memstore flush
> --
>
> Key: HBASE-6466
> URL: https://issues.apache.org/jira/browse/HBASE-6466
> Project: HBase
>  Issue Type: Improvement
>Reporter: chunhui shen
>Assignee: chunhui shen
> Attachments: HBASE-6466.patch, HBASE-6466v2.patch, 
> HBASE-6466v3.1.patch, HBASE-6466v3.patch
>
>
> If the KVs are large or the HLog is closed under high write pressure, we found the 
> memstore is often above the high water mark and blocks the puts.
> So should we enable multi-threaded memstore flush?
> Some performance test data for reference:
> 1. Test environment: 
> random writing; upper memstore limit 5.6GB; lower memstore limit 4.8GB; 400 
> regions per regionserver; row len=50 bytes, value len=1024 bytes; 5 
> regionservers, 300 ipc handlers per regionserver; 5 clients, 50 writer threads 
> per client
> 2. Test results:
> one cacheFlush handler: tps 7.8k/s per regionserver, flush 10.1MB/s per 
> regionserver, many aboveGlobalMemstoreLimit blocks appear
> two cacheFlush handlers: tps 10.7k/s per regionserver, flush 12.46MB/s per 
> regionserver
> 200 writer threads per client & two cacheFlush handlers: tps 16.1k/s per 
> regionserver, flush 18.6MB/s per regionserver
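> If this goes in, enabling it would presumably look something like the sketch 
> below (the property name "hbase.hstore.flusher.count" is an assumption here, 
> not a confirmed name from the patch):
> {code}
> Configuration conf = HBaseConfiguration.create();
> // run two memstore flush threads per regionserver
> conf.setInt("hbase.hstore.flusher.count", 2);
> {code}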

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-4913) Per-CF compaction Via the Shell

2012-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13493575#comment-13493575
 ] 

Hadoop QA commented on HBASE-4913:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12552725/HBASE-4913-94.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3275//console

This message is automatically generated.

> Per-CF compaction Via the Shell
> ---
>
> Key: HBASE-4913
> URL: https://issues.apache.org/jira/browse/HBASE-4913
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, regionserver
>Reporter: Nicolas Spiegelberg
>Assignee: Mubarak Seyed
> Fix For: 0.96.0, 0.94.4
>
> Attachments: HBASE-4913-94.patch, HBASE-4913-addendum.patch, 
> HBASE-4913.trunk.v1.patch, HBASE-4913.trunk.v2.patch, 
> HBASE-4913.trunk.v2.patch, HBASE-4913-trunk-v3.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7130) NULL qualifier is ignored

2012-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13493573#comment-13493573
 ] 

Hadoop QA commented on HBASE-7130:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12552711/trunk-7130.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 9 new 
or modified tests.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
87 warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 16 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.coprocessor.TestAggregateProtocol
  org.apache.hadoop.hbase.master.TestRollingRestart

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3274//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3274//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3274//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3274//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3274//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3274//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3274//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3274//console

This message is automatically generated.

> NULL qualifier is ignored
> -
>
> Key: HBASE-7130
> URL: https://issues.apache.org/jira/browse/HBASE-7130
> Project: HBase
>  Issue Type: Bug
>  Components: Client, Protobufs
>Affects Versions: 0.96.0
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Fix For: 0.96.0
>
> Attachments: trunk-7130.patch
>
>
> HBASE-6206 ignored the NULL qualifier, so the qualifier list could be empty. But 
> the request converter skips an empty qualifier list too.
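> The client-side case in question, as a sketch (the table and row values are 
> assumed here):
> {code}
> Get get = new Get(Bytes.toBytes("row1"));
> // NULL qualifier: ask for the column family with an unqualified column
> get.addColumn(Bytes.toBytes("cf"), null);
> Result result = table.get(get);
> {code}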

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6466) Enable multi-thread for memstore flush

2012-11-08 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13493572#comment-13493572
 ] 

Sergey Shelukhin commented on HBASE-6466:
-

Updating trunk patch. I will run some tests...

> Enable multi-thread for memstore flush
> --
>
> Key: HBASE-6466
> URL: https://issues.apache.org/jira/browse/HBASE-6466
> Project: HBase
>  Issue Type: Improvement
>Reporter: chunhui shen
>Assignee: chunhui shen
> Attachments: HBASE-6466.patch, HBASE-6466v2.patch, HBASE-6466v3.patch
>
>
> If the KVs are large or the HLog is closed under high write pressure, we found the 
> memstore is often above the high water mark and blocks the puts.
> So should we enable multi-threaded memstore flush?
> Some performance test data for reference:
> 1. Test environment: 
> random writing; upper memstore limit 5.6GB; lower memstore limit 4.8GB; 400 
> regions per regionserver; row len=50 bytes, value len=1024 bytes; 5 
> regionservers, 300 ipc handlers per regionserver; 5 clients, 50 writer threads 
> per client
> 2. Test results:
> one cacheFlush handler: tps 7.8k/s per regionserver, flush 10.1MB/s per 
> regionserver, many aboveGlobalMemstoreLimit blocks appear
> two cacheFlush handlers: tps 10.7k/s per regionserver, flush 12.46MB/s per 
> regionserver
> 200 writer threads per client & two cacheFlush handlers: tps 16.1k/s per 
> regionserver, flush 18.6MB/s per regionserver

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7109) integration tests on cluster are not getting picked up from distribution

2012-11-08 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13493568#comment-13493568
 ] 

Sergey Shelukhin commented on HBASE-7109:
-

forgot to rename when splitting them :) will rename. These are different 
responsibilities, I think it's a good idea to split them.

> integration tests on cluster are not getting picked up from distribution
> 
>
> Key: HBASE-7109
> URL: https://issues.apache.org/jira/browse/HBASE-7109
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 0.96.0
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HBASE-7109-squashed.patch, HBASE-7109-v2-squashed.patch
>
>
> The method of finding test classes only works on local build (or its full 
> copy), not if the distribution is used.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7109) integration tests on cluster are not getting picked up from distribution

2012-11-08 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13493569#comment-13493569
 ] 

Sergey Shelukhin commented on HBASE-7109:
-

ClassFinder can be used to find classes according to different rules, etc.

> integration tests on cluster are not getting picked up from distribution
> 
>
> Key: HBASE-7109
> URL: https://issues.apache.org/jira/browse/HBASE-7109
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 0.96.0
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HBASE-7109-squashed.patch, HBASE-7109-v2-squashed.patch
>
>
> The method of finding test classes only works on local build (or its full 
> copy), not if the distribution is used.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7109) integration tests on cluster are not getting picked up from distribution

2012-11-08 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13493560#comment-13493560
 ] 

Ted Yu commented on HBASE-7109:
---

{code}
+public class ClassFinder {
{code}
Please add annotation for audience and stability.
{code}
+  public List<Class<?>> findClasses(String packageName, boolean 
proceedOnExceptions)
{code}
The above method calls findTestClassesFromFiles() and findTestClassesFromJar(). 
This gives me the impression that ClassFinder is already geared towards finding 
test classes.
Can ClassFinder and ClassTestFinder be merged ?

> integration tests on cluster are not getting picked up from distribution
> 
>
> Key: HBASE-7109
> URL: https://issues.apache.org/jira/browse/HBASE-7109
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 0.96.0
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HBASE-7109-squashed.patch, HBASE-7109-v2-squashed.patch
>
>
> The method of finding test classes only works on local build (or its full 
> copy), not if the distribution is used.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6831) [WINDOWS] HBaseTestingUtility.expireSession() does not expire zookeeper session

2012-11-08 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-6831:
-

   Resolution: Fixed
Fix Version/s: 0.96.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I've committed this. Thanks Stack for review. 

> [WINDOWS] HBaseTestingUtility.expireSession() does not expire zookeeper 
> session
> ---
>
> Key: HBASE-6831
> URL: https://issues.apache.org/jira/browse/HBASE-6831
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.3, 0.96.0
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
>  Labels: windows
> Fix For: 0.96.0
>
> Attachments: hbase-6831_v1-0.94.patch, hbase-6831_v1-trunk.patch
>
>
> TestReplicationPeer fails because it forces the zookeeper session expiration 
> by calling HBaseTestingUtility.expireSession(), but that function fails to do 
> so.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7109) integration tests on cluster are not getting picked up from distribution

2012-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13493559#comment-13493559
 ] 

Hadoop QA commented on HBASE-7109:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12552708/HBASE-7109-v2-squashed.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 10 new 
or modified tests.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
87 warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 16 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3273//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3273//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3273//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3273//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3273//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3273//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3273//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3273//console

This message is automatically generated.

> integration tests on cluster are not getting picked up from distribution
> 
>
> Key: HBASE-7109
> URL: https://issues.apache.org/jira/browse/HBASE-7109
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 0.96.0
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HBASE-7109-squashed.patch, HBASE-7109-v2-squashed.patch
>
>
> The method of finding test classes only works on local build (or its full 
> copy), not if the distribution is used.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6828) [WINDOWS] TestMemoryBoundedLogMessageBuffer failures

2012-11-08 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-6828:
-

   Resolution: Fixed
Fix Version/s: 0.96.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I've committed this. Thanks Stack for review. 

> [WINDOWS] TestMemoryBoundedLogMessageBuffer failures
> 
>
> Key: HBASE-6828
> URL: https://issues.apache.org/jira/browse/HBASE-6828
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.3, 0.96.0
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
>  Labels: windows
> Fix For: 0.96.0
>
> Attachments: hbase-6828_v1-0.94.patch, hbase-6828_v1-trunk.patch
>
>
> TestMemoryBoundedLogMessageBuffer fails because of a suspected \n line ending 
> difference.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7121) Fix TestHFileOutputFormat after moving RS to metrics2

2012-11-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13493552#comment-13493552
 ] 

Hudson commented on HBASE-7121:
---

Integrated in HBase-TRUNK #3521 (See 
[https://builds.apache.org/job/HBase-TRUNK/3521/])
HBASE-7121 Fix TestHFileOutputFormat after moving RS to metrics2 (Revision 
1407216)

 Result = FAILURE
eclark : 
Files : 
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerWrapperImpl.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionWrapperImpl.java


> Fix TestHFileOutputFormat after moving RS to metrics2
> -
>
> Key: HBASE-7121
> URL: https://issues.apache.org/jira/browse/HBASE-7121
> Project: HBase
>  Issue Type: Sub-task
>  Components: metrics
>Affects Versions: 0.96.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Fix For: 0.96.0
>
> Attachments: HBASE-7121-0.patch
>
>
> When spinning up lots of threads in a single jvm it's possible that the 
> metrics wrapper can touch variables that are not initialized.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-4913) Per-CF compaction Via the Shell

2012-11-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13493553#comment-13493553
 ] 

Hudson commented on HBASE-4913:
---

Integrated in HBase-TRUNK #3521 (See 
[https://builds.apache.org/job/HBase-TRUNK/3521/])
HBASE-4913 Per-CF compaction Via the Shell (Mubarak and Gregory) (Revision 
1407227)

 Result = FAILURE
gchanan : 
Files : 
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/protobuf/RequestConverter.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/protobuf/generated/AdminProtos.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
* /hbase/trunk/hbase-server/src/main/protobuf/Admin.proto
* /hbase/trunk/hbase-server/src/main/ruby/hbase/admin.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/compact.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/major_compact.rb
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompactionState.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegionServerBulkLoad.java


> Per-CF compaction Via the Shell
> ---
>
> Key: HBASE-4913
> URL: https://issues.apache.org/jira/browse/HBASE-4913
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, regionserver
>Reporter: Nicolas Spiegelberg
>Assignee: Mubarak Seyed
> Fix For: 0.96.0, 0.94.4
>
> Attachments: HBASE-4913-94.patch, HBASE-4913-addendum.patch, 
> HBASE-4913.trunk.v1.patch, HBASE-4913.trunk.v2.patch, 
> HBASE-4913.trunk.v2.patch, HBASE-4913-trunk-v3.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HBASE-6827) [WINDOWS] TestScannerTimeout fails expecting a timeout

2012-11-08 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar resolved HBASE-6827.
--

   Resolution: Fixed
Fix Version/s: 0.96.0
 Hadoop Flags: Reviewed

I've committed this. Thanks Stack for the review. 

> [WINDOWS] TestScannerTimeout fails expecting a timeout
> --
>
> Key: HBASE-6827
> URL: https://issues.apache.org/jira/browse/HBASE-6827
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.3, 0.96.0
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 0.96.0
>
> Attachments: hbase-6827_v1-0.94.patch, hbase-6827_v1-trunk.patch
>
>
> TestScannerTimeout.test2481() fails with:
> {code}
> java.lang.AssertionError: We should be timing out
>   at org.junit.Assert.fail(Assert.java:93)
>   at 
> org.apache.hadoop.hbase.client.TestScannerTimeout.test2481(TestScannerTimeout.java:117)
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-4913) Per-CF compaction Via the Shell

2012-11-08 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan updated HBASE-4913:
--

Attachment: HBASE-4913-94.patch

* Attached HBASE-4913-94.patch *

94 version of patch.

> Per-CF compaction Via the Shell
> ---
>
> Key: HBASE-4913
> URL: https://issues.apache.org/jira/browse/HBASE-4913
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, regionserver
>Reporter: Nicolas Spiegelberg
>Assignee: Mubarak Seyed
> Fix For: 0.96.0, 0.94.4
>
> Attachments: HBASE-4913-94.patch, HBASE-4913-addendum.patch, 
> HBASE-4913.trunk.v1.patch, HBASE-4913.trunk.v2.patch, 
> HBASE-4913.trunk.v2.patch, HBASE-4913-trunk-v3.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-4913) Per-CF compaction Via the Shell

2012-11-08 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan updated HBASE-4913:
--

Attachment: HBASE-4913-addendum.patch

* Attached HBASE-4913-addendum.patch *

When doing some testing on the 94 patch, I noticed the ruby parsing isn't that 
great; if you pass more arguments than are supported, it just ignores the 
command rather than giving you an error message.

> Per-CF compaction Via the Shell
> ---
>
> Key: HBASE-4913
> URL: https://issues.apache.org/jira/browse/HBASE-4913
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, regionserver
>Reporter: Nicolas Spiegelberg
>Assignee: Mubarak Seyed
> Fix For: 0.96.0, 0.94.4
>
> Attachments: HBASE-4913-addendum.patch, HBASE-4913.trunk.v1.patch, 
> HBASE-4913.trunk.v2.patch, HBASE-4913.trunk.v2.patch, 
> HBASE-4913-trunk-v3.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6823) [WINDOWS] TestSplitTransaction fails due to the Log handle not released by a call to DaughterOpener.start()

2012-11-08 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-6823:
-

Resolution: Fixed
Status: Resolved  (was: Patch Available)

I've committed the v2 patch. It is just a rebase of the v1, w/o the imports. 
Thanks Stack for the review. 

> [WINDOWS] TestSplitTransaction fails due to the Log handle not released by a 
> call to DaughterOpener.start()
> ---
>
> Key: HBASE-6823
> URL: https://issues.apache.org/jira/browse/HBASE-6823
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.3, 0.96.0
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
>  Labels: windows
> Fix For: 0.96.0
>
> Attachments: hbase-6823_v1-0.94.patch, hbase-6823_v1-trunk.patch, 
> hbase-6823_v2-0.94.patch, hbase-6823_v2-trunk.patch
>
>
> There are two unit test cases in the HBase RegionServer tests that fail in the 
> cleanup stage because they cannot delete the files/folders created during the 
> test:
> testWholesomeSplit(org.apache.hadoop.hbase.regionserver.TestSplitTransaction): 
> Failed delete of 
> ./target/test-data/1c386abc-f159-492e-b21f-e89fab24d85b/org.apache.hadoop.hbase.regionserver.TestSplitTransaction/table/a588d813fd26280c2b42e93565ed960c
> testRollback(org.apache.hadoop.hbase.regionserver.TestSplitTransaction): 
> Failed delete of 
> ./target/test-data/61a1a14b-0cc9-4dd6-93fd-4dc021e2bfcc/org.apache.hadoop.hbase.regionserver.TestSplitTransaction/table/8090abc89528461fa284288c257662cd
> The root cause is a call to DaughterOpener.start() in 
> \src\hbase\src\main\java\org\apache\hadoop\hbase\regionserver\SplitTransaction.java 
> (the openDaughters() function). It leaves handles to the split folders/files 
> open, which makes deleting those files/folders fail on Windows: Windows does 
> not allow deleting a file while there are open handles on it.
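
A minimal sketch of the general pattern the description above points at: close the 
opened daughter regions, releasing their file handles, before the test cleanup tries 
to delete the directories. This is illustrative only and is not the fix committed for 
this issue.

{code}
// Hedged sketch: close regions so their store file handles are released
// before the test's cleanup deletes the on-disk directories (required on Windows).
import java.io.IOException;
import org.apache.hadoop.hbase.regionserver.HRegion;

public class CloseBeforeDeleteSketch {
  public static void closeDaughters(HRegion daughterA, HRegion daughterB) throws IOException {
    if (daughterA != null) {
      daughterA.close();  // releases the store file handles held by the region
    }
    if (daughterB != null) {
      daughterB.close();
    }
  }
}
{code}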

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6820) [WINDOWS] MiniZookeeperCluster should ensure that ZKDatabase is closed upon shutdown()

2012-11-08 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-6820:
-

   Resolution: Fixed
Fix Version/s: 0.96.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I've committed this. Thanks Stack for the review. 

> [WINDOWS] MiniZookeeperCluster should ensure that ZKDatabase is closed upon 
> shutdown()
> --
>
> Key: HBASE-6820
> URL: https://issues.apache.org/jira/browse/HBASE-6820
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.3, 0.96.0
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
>  Labels: windows
> Fix For: 0.96.0
>
> Attachments: hbase-6820_v1-0.94.patch, hbase-6820_v1-trunk.patch
>
>
> MiniZookeeperCluster.shutdown() shuts down the ZookeeperServer and 
> NIOServerCnxnFactory. However, MiniZookeeperCluster uses a deprecated 
> ZookeeperServer constructor, which in turn constructs its own FileTxnSnapLog 
> and ZKDatabase. Since ZookeeperServer.shutdown() does not close() the 
> ZKDatabase, we have to explicitly close it in MiniZookeeperCluster.shutdown().
> Tests affected by this are
> {code}
> TestSplitLogManager
> TestSplitLogWorker
> TestOfflineMetaRebuildBase
> TestOfflineMetaRebuildHole
> TestOfflineMetaRebuildOverlap
> {code}
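
A minimal sketch of the kind of change the description above calls for: explicitly 
closing the ZKDatabase after the embedded ZooKeeper server is shut down. The helper 
name and wiring are illustrative, not the patch's actual code.

{code}
// Hedged sketch: close the ZKDatabase explicitly, since ZooKeeperServer.shutdown()
// does not do it and the underlying FileTxnSnapLog keeps file handles open.
import java.io.IOException;
import org.apache.zookeeper.server.ZooKeeperServer;

public class MiniZkShutdownSketch {
  public static void shutdownServer(ZooKeeperServer zkServer) throws IOException {
    zkServer.shutdown();
    if (zkServer.getZKDatabase() != null) {
      // Releases the transaction log / snapshot file handles held by the database.
      zkServer.getZKDatabase().close();
    }
  }
}
{code}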

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6822) [WINDOWS] MiniZookeeperCluster multiple daemons bind to the same port

2012-11-08 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-6822:
-

   Resolution: Fixed
Fix Version/s: 0.96.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I've committed this. Thanks Stack for the review.

> [WINDOWS] MiniZookeeperCluster multiple daemons bind to the same port
> -
>
> Key: HBASE-6822
> URL: https://issues.apache.org/jira/browse/HBASE-6822
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.3, 0.96.0
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 0.96.0
>
> Attachments: hbase-6822_v1-0.94.patch, hbase-6822_v1-trunk.patch
>
>
> TestHBaseTestingUtility.testMiniZooKeeper() tests whether the mini zk cluster 
> is working by launching 5 threads corresponding to zk servers. 
> NIOServerCnxnFactory.configure() configures the socket as:
> {code}
> this.ss = ServerSocketChannel.open();
> ss.socket().setReuseAddress(true);
> {code}
> setReuseAddress() is set because it allows the server to come back up and 
> bind to the same port before the socket is timed out by the kernel.
> On Windows, however, ServerSocket.setReuseAddress() behaves differently than 
> on Linux: it allows any process to bind to an already-bound port. This lets 
> ZK servers started on the same machine bind to the same port. 
> The following part of the patch at 
> https://issues.apache.org/jira/browse/HADOOP-8223 deals with this case for 
> Hadoop:
> {code}
> if(Shell.WINDOWS) {
> +  // result of setting the SO_REUSEADDR flag is different on Windows
> +  // http://msdn.microsoft.com/en-us/library/ms740621(v=vs.85).aspx
> +  // without this 2 NN's can start on the same machine and listen on 
> +  // the same port with indeterminate routing of incoming requests to 
> them
> +  ret.setReuseAddress(false);
> +}
> {code}
> We should do the same in ZooKeeper (I'll open a ZooKeeper issue). But in the 
> meantime, we can fix the HBase tests not to rely on a BindException to detect 
> bind errors. In particular, in MiniZKCluster.startup(), when starting more than 
> one server we already know that we have to increment the port number. 
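
A small Java sketch of the SO_REUSEADDR point made above, mirroring the quoted 
HADOOP-8223 approach; the helper class and the isWindows flag are illustrative, not 
HBase or ZooKeeper code.

{code}
// Hedged sketch: on Windows, SO_REUSEADDR lets a second process bind to an
// already-bound port, so disable it there to surface a BindException instead
// of silently sharing the port.
import java.net.ServerSocket;
import java.net.SocketException;

public class ReuseAddressSketch {
  public static void configure(ServerSocket socket, boolean isWindows) throws SocketException {
    if (isWindows) {
      socket.setReuseAddress(false);
    } else {
      // On Linux this only allows rebinding a port that is still in TIME_WAIT.
      socket.setReuseAddress(true);
    }
  }
}
{code}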

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6826) [WINDOWS] TestFromClientSide failures

2012-11-08 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-6826:
-

   Resolution: Fixed
Fix Version/s: 0.96.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I've committed this. Thanks Stack for the review. 

> [WINDOWS] TestFromClientSide failures
> -
>
> Key: HBASE-6826
> URL: https://issues.apache.org/jira/browse/HBASE-6826
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.3, 0.96.0
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
>  Labels: windows
> Fix For: 0.96.0
>
> Attachments: hbase-6826_v1-0.94.patch, hbase-6826_v1-trunk.patch, 
> hbase-6826_v2-0.94.patch, hbase-6826_v2-trunk.patch
>
>
> The following tests fail for TestFromClientSide: 
> {code}
> testPoolBehavior()
> testClientPoolRoundRobin()
> testClientPoolThreadLocal()
> {code}
> The first test fails because it (wrongly) assumes that ThreadPoolExecutor can 
> reclaim the thread immediately. 
> The second and third tests seem to fail because the Puts to the table do not 
> specify an explicit timestamp; on Windows, consecutive calls to put happen to 
> finish in the same millisecond, so the resulting mutations share the same 
> timestamp and only one version of the cell value remains.
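
A minimal sketch of the timestamp point above: giving each Put an explicit, distinct 
timestamp guarantees separate cell versions even when the wall clock does not advance 
between calls. The table, family, and value names are illustrative.

{code}
// Hedged sketch: explicit timestamps make consecutive Puts land as distinct
// versions even if they execute within the same millisecond.
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class ExplicitTimestampPuts {
  public static void writeVersions(HTable table, byte[] row) throws Exception {
    byte[] family = Bytes.toBytes("cf");
    byte[] qualifier = Bytes.toBytes("q");
    long base = System.currentTimeMillis();
    for (int i = 0; i < 3; i++) {
      Put put = new Put(row);
      // base + i: each mutation gets its own timestamp, hence its own version.
      put.add(family, qualifier, base + i, Bytes.toBytes("value-" + i));
      table.put(put);
    }
  }
}
{code}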

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7110) refactor the compaction selection and config code similarly to 0.89-fb changes

2012-11-08 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HBASE-7110:


Attachment: HBASE-7110-v6-squashed.patch

> refactor the compaction selection and config code similarly to 0.89-fb changes
> --
>
> Key: HBASE-7110
> URL: https://issues.apache.org/jira/browse/HBASE-7110
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 0.96.0
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HBASE-6371-v5-refactor-only-squashed.patch, 
> HBASE-7110-v6-squashed.patch
>
>
> Separate JIRA for refactoring changes from HBASE-7055 (and further ones after 
> code review)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7110) refactor the compaction selection and config code similarly to 0.89-fb changes

2012-11-08 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13493535#comment-13493535
 ] 

Sergey Shelukhin commented on HBASE-7110:
-

updated

> refactor the compaction selection and config code similarly to 0.89-fb changes
> --
>
> Key: HBASE-7110
> URL: https://issues.apache.org/jira/browse/HBASE-7110
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 0.96.0
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HBASE-6371-v5-refactor-only-squashed.patch, 
> HBASE-7110-v6-squashed.patch
>
>
> Separate JIRA for refactoring changes from HBASE-7055 (and further ones after 
> code review)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7115) [shell] Provide a way to register custom filters with the Filter Language Parser

2012-11-08 Thread Aditya Kishore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya Kishore updated HBASE-7115:
--

Description: 
HBASE-5428 added this capability to the thrift interface, but the configuration 
parameter name is "thrift"-specific.

This patch introduces a more generic parameter, "hbase.user.filters", with which 
user-defined custom filters can be specified in the configuration and loaded by 
any client that needs the filter language parser.

The patch then uses this new parameter to register any user specified filters 
while invoking the HBase shell.

Example usage: Let's say I have written a couple of custom filters with class 
names *{{org.apache.hadoop.hbase.filter.custom.SuperDuperFilter}}* and 
*{{org.apache.hadoop.hbase.filter.custom.SilverBulletFilter}}* and I want to 
use them from HBase shell using the filter language.

To do that, I would add the following configuration to {{hbase-site.xml}}

{panel}{{<property>}}
{{  <name>hbase.user.filters</name>}}
{{  <value>}}*{{SuperDuperFilter}}*{{:org.apache.hadoop.hbase.filter.custom.SuperDuperFilter,}}*{{SilverBulletFilter}}*{{:org.apache.hadoop.hbase.filter.custom.SilverBulletFilter}}{{</value>}}
{{</property>}}{panel}

Once this is configured, I can launch HBase shell and use these filters in my 
{{get}} or {{scan}} just the way I would use a built-in filter.

{code}
hbase(main):001:0> scan 't', {FILTER => "SuperDuperFilter(true) AND 
SilverBulletFilter(42)"}
ROW  COLUMN+CELL
 status  column=cf:a, 
timestamp=30438552, value=world_peace
1 row(s) in 0. seconds
{code}

To use this feature in any client, the client needs to make the following 
function call as part of its initialization.
{code}
ParseFilter.registerUserFilters(configuration);
{code}
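
A minimal client-side sketch of the initialization call just mentioned. 
ParseFilter.registerUserFilters is the method this patch introduces; the surrounding 
wiring (class name, where it is invoked) is illustrative.

{code}
// Hedged sketch: register user-defined filters with the filter language parser
// before issuing shell-style scans/gets that reference them by alias.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.filter.ParseFilter;

public class CustomFilterClientInit {
  public static Configuration init() {
    Configuration conf = HBaseConfiguration.create();
    // Reads hbase.user.filters (alias:class pairs) and registers each alias.
    ParseFilter.registerUserFilters(conf);
    return conf;
  }
}
{code}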

  was:
HBASE-5428 added this capability to thrift interface but the configuration 
parameter name is "thrift" specific.

This patch introduces a more generic parameter "hbase.user.filters" using which 
the user defined custom filters can be specified in the configuration and 
loaded in any client that needs to use the filter language parser.

The patch then uses this new parameter to register any user specified filters 
while invoking the HBase shell.

Example usage: Let's say I have written a couple of custom filters with class 
names *{{org.apache.hadoop.hbase.filter.custom.SuperDuperFilter}}* and 
*{{org.apache.hadoop.hbase.filter.custom.SilverBulletFilter}}* and I want to 
use them from HBase shell using the filter language.

To do that, I would add the following configuration to {{hbase-site.xml}}

{panel}{{<property>}}
{{  <name>hbase.user.filters</name>}}
{{  <value>}}*{{SuperDuperFilter}}*{{:org.apache.hadoop.hbase.filter.custom.SuperDuperFilter,}}*{{SilverBulletFilter}}*{{:org.apache.hadoop.hbase.filter.custom.SilverBulletFilter}}{{</value>}}
{{</property>}}{panel}

Once this is configured, I can launch HBase shell and use these filters in my 
{{get}} or {{scan}} just the way I would use a built-in filter.

{code}
hbase(main):001:0> scan 't', {FILTER => "SuperDuperFilter(true) AND 
SilverBulletFilter(42)"}
ROW  COLUMN+CELL
 status  column=cf:a, 
timestamp=30438552, value=world_peace
1 row(s) in 0. seconds
{code}


> [shell] Provide a way to register custom filters with the Filter Language 
> Parser
> 
>
> Key: HBASE-7115
> URL: https://issues.apache.org/jira/browse/HBASE-7115
> Project: HBase
>  Issue Type: Improvement
>  Components: Filters, shell
>Affects Versions: 0.96.0
>Reporter: Aditya Kishore
>Assignee: Aditya Kishore
> Fix For: 0.96.0
>
> Attachments: HBASE-7115_trunk.patch
>
>
> HBASE-5428 added this capability to thrift interface but the configuration 
> parameter name is "thrift" specific.
> This patch introduces a more generic parameter "hbase.user.filters" using 
> which the user defined custom filters can be specified in the configuration 
> and loaded in any client that needs to use the filter language parser.
> The patch then uses this new parameter to register any user specified filters 
> while invoking the HBase shell.
> Example usage: Let's say I have written a couple of custom filters with class 
> names *{{org.apache.hadoop.hbase.filter.custom.SuperDuperFilter}}* and 
> *{{org.apache.hadoop.hbase.filter.custom.SilverBulletFilter}}* and I want to 
> use them from HBase shell using the filter language.
> To do that, I would add the following configuration to {{hbase-site.xml}}
> {panel}{{}}
> {{  hbase.user.filters}}
> {{  }}*{{SuperDuperFilter}}*{{:org.apache.hadoop.hbase.filter.custom.SuperDuperFilter,}}*{{SilverBulletFilter}}*{{:org.apache.hadoop.hbase.filter.

[jira] [Commented] (HBASE-7115) [shell] Provide a way to register custom filters with the Filter Language Parser

2012-11-08 Thread Aditya Kishore (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13493520#comment-13493520
 ] 

Aditya Kishore commented on HBASE-7115:
---

And yes, this only registers the custom filters with the Filter Language Parser 
and does not add the JARs to the client/server classpath. Let me think about 
it. Probably we can load the filter jars the same way coprocessor jars are 
picked up.

> [shell] Provide a way to register custom filters with the Filter Language 
> Parser
> 
>
> Key: HBASE-7115
> URL: https://issues.apache.org/jira/browse/HBASE-7115
> Project: HBase
>  Issue Type: Improvement
>  Components: Filters, shell
>Affects Versions: 0.96.0
>Reporter: Aditya Kishore
>Assignee: Aditya Kishore
> Fix For: 0.96.0
>
> Attachments: HBASE-7115_trunk.patch
>
>
> HBASE-5428 added this capability to thrift interface but the configuration 
> parameter name is "thrift" specific.
> This patch introduces a more generic parameter "hbase.user.filters" using 
> which the user defined custom filters can be specified in the configuration 
> and loaded in any client that needs to use the filter language parser.
> The patch then uses this new parameter to register any user specified filters 
> while invoking the HBase shell.
> Example usage: Let's say I have written a couple of custom filters with class 
> names *{{org.apache.hadoop.hbase.filter.custom.SuperDuperFilter}}* and 
> *{{org.apache.hadoop.hbase.filter.custom.SilverBulletFilter}}* and I want to 
> use them from HBase shell using the filter language.
> To do that, I would add the following configuration to {{hbase-site.xml}}
> {panel}{{<property>}}
> {{  <name>hbase.user.filters</name>}}
> {{  <value>}}*{{SuperDuperFilter}}*{{:org.apache.hadoop.hbase.filter.custom.SuperDuperFilter,}}*{{SilverBulletFilter}}*{{:org.apache.hadoop.hbase.filter.custom.SilverBulletFilter}}{{</value>}}
> {{</property>}}{panel}
> Once this is configured, I can launch HBase shell and use these filters in my 
> {{get}} or {{scan}} just the way I would use a built-in filter.
> {code}
> hbase(main):001:0> scan 't', {FILTER => "SuperDuperFilter(true) AND 
> SilverBulletFilter(42)"}
> ROW  COLUMN+CELL
>  status  column=cf:a, 
> timestamp=30438552, value=world_peace
> 1 row(s) in 0. seconds
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7115) [shell] Provide a way to register custom filters with the Filter Language Parser

2012-11-08 Thread Aditya Kishore (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13493516#comment-13493516
 ] 

Aditya Kishore commented on HBASE-7115:
---

[~stack] Have updated the JIRA description.

> [shell] Provide a way to register custom filters with the Filter Language 
> Parser
> 
>
> Key: HBASE-7115
> URL: https://issues.apache.org/jira/browse/HBASE-7115
> Project: HBase
>  Issue Type: Improvement
>  Components: Filters, shell
>Affects Versions: 0.96.0
>Reporter: Aditya Kishore
>Assignee: Aditya Kishore
> Fix For: 0.96.0
>
> Attachments: HBASE-7115_trunk.patch
>
>
> HBASE-5428 added this capability to thrift interface but the configuration 
> parameter name is "thrift" specific.
> This patch introduces a more generic parameter "hbase.user.filters" using 
> which the user defined custom filters can be specified in the configuration 
> and loaded in any client that needs to use the filter language parser.
> The patch then uses this new parameter to register any user specified filters 
> while invoking the HBase shell.
> Example usage: Let's say I have written a couple of custom filters with class 
> names *{{org.apache.hadoop.hbase.filter.custom.SuperDuperFilter}}* and 
> *{{org.apache.hadoop.hbase.filter.custom.SilverBulletFilter}}* and I want to 
> use them from HBase shell using the filter language.
> To do that, I would add the following configuration to {{hbase-site.xml}}
> {panel}{{<property>}}
> {{  <name>hbase.user.filters</name>}}
> {{  <value>}}*{{SuperDuperFilter}}*{{:org.apache.hadoop.hbase.filter.custom.SuperDuperFilter,}}*{{SilverBulletFilter}}*{{:org.apache.hadoop.hbase.filter.custom.SilverBulletFilter}}{{</value>}}
> {{</property>}}{panel}
> Once this is configured, I can launch HBase shell and use these filters in my 
> {{get}} or {{scan}} just the way I would use a built-in filter.
> {code}
> hbase(main):001:0> scan 't', {FILTER => "SuperDuperFilter(true) AND 
> SilverBulletFilter(42)"}
> ROW  COLUMN+CELL
>  status  column=cf:a, 
> timestamp=30438552, value=world_peace
> 1 row(s) in 0. seconds
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7130) NULL qualifier is ignored

2012-11-08 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-7130:
---

Status: Patch Available  (was: Open)

> NULL qualifier is ignored
> -
>
> Key: HBASE-7130
> URL: https://issues.apache.org/jira/browse/HBASE-7130
> Project: HBase
>  Issue Type: Bug
>  Components: Client, Protobufs
>Affects Versions: 0.96.0
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Fix For: 0.96.0
>
> Attachments: trunk-7130.patch
>
>
> HBASE-6206 ignores a NULL qualifier, so the qualifier list can be empty. But 
> the request converter skips an empty qualifier list too.
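
A small sketch of the client call pattern the description refers to; the row and 
family names are illustrative, and the comments only restate what the description 
above says, not the converter internals.

{code}
// Hedged sketch: a Get with a NULL qualifier. Per the description above,
// the NULL qualifier is ignored (leaving the family's qualifier list empty),
// and the request converter then skips the empty list as well.
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.util.Bytes;

public class NullQualifierGet {
  public static Get buildGet(byte[] row) {
    Get get = new Get(row);
    get.addColumn(Bytes.toBytes("cf"), null);  // NULL qualifier
    return get;
  }
}
{code}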

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7130) NULL qualifier is ignored

2012-11-08 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-7130:
---

Attachment: trunk-7130.patch

> NULL qualifier is ignored
> -
>
> Key: HBASE-7130
> URL: https://issues.apache.org/jira/browse/HBASE-7130
> Project: HBase
>  Issue Type: Bug
>  Components: Client, Protobufs
>Affects Versions: 0.96.0
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Fix For: 0.96.0
>
> Attachments: trunk-7130.patch
>
>
> HBASE-6206 ignores a NULL qualifier, so the qualifier list can be empty. But 
> the request converter skips an empty qualifier list too.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7109) integration tests on cluster are not getting picked up from distribution

2012-11-08 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13493507#comment-13493507
 ] 

Sergey Shelukhin commented on HBASE-7109:
-

renamed class and parameter, added some javadocs

> integration tests on cluster are not getting picked up from distribution
> 
>
> Key: HBASE-7109
> URL: https://issues.apache.org/jira/browse/HBASE-7109
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 0.96.0
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HBASE-7109-squashed.patch, HBASE-7109-v2-squashed.patch
>
>
> The method of finding test classes only works against a local build (or a full 
> copy of it), not when the packaged distribution is used.
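
A generic sketch of the underlying technique: enumerating candidate classes from a 
packaged jar rather than from an on-disk build tree. This is illustrative only and is 
not the patch's actual implementation.

{code}
// Hedged sketch: list class names under a package prefix from a distribution jar.
import java.util.ArrayList;
import java.util.Enumeration;
import java.util.List;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;

public class JarClassFinderSketch {
  public static List<String> findClasses(String jarPath, String packagePrefix) throws Exception {
    List<String> names = new ArrayList<String>();
    JarFile jar = new JarFile(jarPath);
    try {
      for (Enumeration<JarEntry> entries = jar.entries(); entries.hasMoreElements();) {
        String entry = entries.nextElement().getName();
        if (!entry.endsWith(".class")) {
          continue;
        }
        // foo/bar/Baz.class -> foo.bar.Baz
        String className =
            entry.substring(0, entry.length() - ".class".length()).replace('/', '.');
        if (className.startsWith(packagePrefix)) {
          names.add(className);
        }
      }
    } finally {
      jar.close();
    }
    return names;
  }
}
{code}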

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

