[jira] [Created] (HBASE-5093) wiki update for HBase/Scala

2011-12-25 Thread Joe Stein (Created) (JIRA)
wiki update for HBase/Scala
---

 Key: HBASE-5093
 URL: https://issues.apache.org/jira/browse/HBASE-5093
 Project: HBase
  Issue Type: Improvement
Reporter: Joe Stein


I tried to edit the wiki, but it says "immutable page".

It would be helpful/nice for folks to know how to get HBase working with sbt 
and Scala.

The following is what I did to get it working. Not sure why I could not edit 
the wiki, so I figured I would open a JIRA so someone with access could update 
it.

{code}
resolvers += "Apache HBase" at
  "https://repository.apache.org/content/repositories/releases"

libraryDependencies ++= Seq(
  "org.apache.hadoop" % "hadoop-core" % "0.20.2",
  "org.apache.hbase" % "hbase" % "0.90.4"
)
{code}
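
For anyone landing on this issue, a minimal smoke test once the dependencies 
resolve (purely illustrative: it assumes a running HBase with a table "test" 
and column family "cf" already created):

{code}
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.{HTable, Put}
import org.apache.hadoop.hbase.util.Bytes

object HBaseSmokeTest {
  def main(args: Array[String]) {
    // Picks up hbase-site.xml from the classpath.
    val conf = HBaseConfiguration.create()
    // Assumes table "test" with column family "cf" already exists.
    val table = new HTable(conf, "test")
    val put = new Put(Bytes.toBytes("row1"))
    put.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("value"))
    table.put(put)
    table.close()
  }
}
{code}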

Or grant me access and I can do it myself, np.

[jira] [Updated] (HBASE-4218) Data Block Encoding of KeyValues (aka delta encoding / prefix compression)

2011-12-25 Thread Zhihong Yu (Updated) (JIRA)

 [ https://issues.apache.org/jira/browse/HBASE-4218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Zhihong Yu updated HBASE-4218:
--

Comment: was deleted

(was: Integrated in HBase-TRUNK #2576 (See [https://builds.apache.org/job/HBase-TRUNK/2576/])
HBASE-4218 Data Block Encoding of KeyValues - revert, problems were uncovered during cluster testing

tedyu :
Files :
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/HColumnDescriptor.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/KeyValue.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/HalfStoreFileReader.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/encoding/BitsetKeyDeltaEncoder.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/encoding/BufferedDataBlockEncoder.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/encoding/CompressionState.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/encoding/CopyKeyDataBlockEncoder.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/encoding/DataBlockEncoder.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/encoding/DataBlockEncodings.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/encoding/DiffKeyDeltaEncoder.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/encoding/EncodedDataBlock.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/encoding/EncoderBufferTooSmallException.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/encoding/FastDiffDeltaEncoder.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/encoding/PrefixKeyDeltaEncoder.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/AbstractHFileReader.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/AbstractHFileWriter.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/BlockType.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/HFile.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlockIndex.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileDataBlockEncoder.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileDataBlockEncoderImpl.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/HFilePrettyPrinter.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV1.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV2.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileWriterV1.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileWriterV2.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/LruBlockCache.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/NoOpDataBlockEncoder.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/MemStore.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/Store.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFile.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFileScanner.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/metrics/SchemaConfigured.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/util/ByteBufferUtils.java
* /hbase/trunk/src/main/ruby/hbase/admin.rb
* /hbase/trunk/src/test/java/org/apache/hadoop/hbase/HBaseTestCase.java
* /hbase/trunk/src/test/java/org/apache/hadoop/hbase/HFilePerformanceEvaluation.java
* /hbase/trunk/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java
* /hbase/trunk/src/test/java/org/apache/hadoop/hbase/io/TestHalfStoreFileReader.java
* /hbase/trunk/src/test/java/org/apache/hadoop/hbase/io/TestHeapSize.java
* /hbase/trunk/src/test/java/org/apache/hadoop/hbase/io/encoding/RedundantKVGenerator.java
* /hbase/trunk/src/test/java/org/apache/hadoop/hbase/io/encoding/TestBufferedDataBlockEncoder.java
* /hbase/trunk/src/test/java/org/apache/hadoop/hbase/io/encoding/TestDataBlockEncoders.java
* /hbase/trunk/src/test/java/org/apache/hadoop/hbase/io/hfile/CacheTestUtils.java
* /hbase/trunk/src/test/java/org/apache/hadoop/hbase/io/hfile/TestCacheOnWrite.java
* /hbase/trunk/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlock.java
* /hbase/trunk/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlockIndex.java
* /hbase/trunk/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileDataBlockEncoder.java
* /hbase/trunk/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileWriterV2.java
* /hbase/trunk/src/test/java/org/apache/hadoop/hbase/mapreduce/TestImportExport.java
* /hbase/trunk/src/test/java/org/apache/hadoop/hbase/regionserver/CreateRandomStoreFile.java
* /hbase/trunk/src/test/java/org/apache/had

[jira] [Updated] (HBASE-4218) Data Block Encoding of KeyValues (aka delta encoding / prefix compression)

2011-12-25 Thread Zhihong Yu (Updated) (JIRA)

 [ https://issues.apache.org/jira/browse/HBASE-4218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Zhihong Yu updated HBASE-4218:
--

Comment: was deleted

(was: Integrated in HBase-TRUNK-security #48 (See [https://builds.apache.org/job/HBase-TRUNK-security/48/])
HBASE-4218 Data Block Encoding of KeyValues - revert, problems were uncovered during cluster testing

tedyu :
Files :
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/HColumnDescriptor.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/KeyValue.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/HalfStoreFileReader.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/encoding/BitsetKeyDeltaEncoder.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/encoding/BufferedDataBlockEncoder.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/encoding/CompressionState.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/encoding/CopyKeyDataBlockEncoder.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/encoding/DataBlockEncoder.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/encoding/DataBlockEncodings.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/encoding/DiffKeyDeltaEncoder.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/encoding/EncodedDataBlock.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/encoding/EncoderBufferTooSmallException.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/encoding/FastDiffDeltaEncoder.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/encoding/PrefixKeyDeltaEncoder.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/AbstractHFileReader.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/AbstractHFileWriter.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/BlockType.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/HFile.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlockIndex.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileDataBlockEncoder.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileDataBlockEncoderImpl.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/HFilePrettyPrinter.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV1.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV2.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileWriterV1.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileWriterV2.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/LruBlockCache.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/NoOpDataBlockEncoder.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/MemStore.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/Store.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFile.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFileScanner.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/metrics/SchemaConfigured.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/util/ByteBufferUtils.java
* /hbase/trunk/src/main/ruby/hbase/admin.rb
* /hbase/trunk/src/test/java/org/apache/hadoop/hbase/HBaseTestCase.java
* /hbase/trunk/src/test/java/org/apache/hadoop/hbase/HFilePerformanceEvaluation.java
* /hbase/trunk/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java
* /hbase/trunk/src/test/java/org/apache/hadoop/hbase/io/TestHalfStoreFileReader.java
* /hbase/trunk/src/test/java/org/apache/hadoop/hbase/io/TestHeapSize.java
* /hbase/trunk/src/test/java/org/apache/hadoop/hbase/io/encoding/RedundantKVGenerator.java
* /hbase/trunk/src/test/java/org/apache/hadoop/hbase/io/encoding/TestBufferedDataBlockEncoder.java
* /hbase/trunk/src/test/java/org/apache/hadoop/hbase/io/encoding/TestDataBlockEncoders.java
* /hbase/trunk/src/test/java/org/apache/hadoop/hbase/io/hfile/CacheTestUtils.java
* /hbase/trunk/src/test/java/org/apache/hadoop/hbase/io/hfile/TestCacheOnWrite.java
* /hbase/trunk/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlock.java
* /hbase/trunk/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlockIndex.java
* /hbase/trunk/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileDataBlockEncoder.java
* /hbase/trunk/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileWriterV2.java
* /hbase/trunk/src/test/java/org/apache/hadoop/hbase/mapreduce/TestImportExport.java
* /hbase/trunk/src/test/java/org/apache/hadoop/hbase/regionserver/CreateRandomStoreFile.java
* /hbase/trunk/src/test/java/

[jira] [Updated] (HBASE-4218) Data Block Encoding of KeyValues (aka delta encoding / prefix compression)

2011-12-25 Thread Zhihong Yu (Updated) (JIRA)

 [ https://issues.apache.org/jira/browse/HBASE-4218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Zhihong Yu updated HBASE-4218:
--

Comment: was deleted

(was: Integrated in HBase-TRUNK-security #47 (See [https://builds.apache.org/job/HBase-TRUNK-security/47/])
HBASE-4218 Data Block Encoding of KeyValues (aka delta encoding / prefix compression) - files used for testing
HBASE-4218 Data Block Encoding of KeyValues (aka delta encoding / prefix compression) (Jacek, Mikhail)

tedyu :
Files :
* /hbase/trunk/src/test/java/org/apache/hadoop/hbase/io/encoding
* /hbase/trunk/src/test/java/org/apache/hadoop/hbase/io/encoding/RedundantKVGenerator.java
* /hbase/trunk/src/test/java/org/apache/hadoop/hbase/io/encoding/TestBufferedDataBlockEncoder.java
* /hbase/trunk/src/test/java/org/apache/hadoop/hbase/io/encoding/TestDataBlockEncoders.java

tedyu :
Files :
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/HColumnDescriptor.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/KeyValue.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/HalfStoreFileReader.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/encoding
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/encoding/BitsetKeyDeltaEncoder.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/encoding/BufferedDataBlockEncoder.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/encoding/CompressionState.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/encoding/CopyKeyDataBlockEncoder.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/encoding/DataBlockEncoder.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/encoding/DataBlockEncodings.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/encoding/DiffKeyDeltaEncoder.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/encoding/EncodedDataBlock.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/encoding/EncoderBufferTooSmallException.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/encoding/FastDiffDeltaEncoder.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/encoding/PrefixKeyDeltaEncoder.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/AbstractHFileReader.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/AbstractHFileWriter.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/BlockType.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/HFile.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlockIndex.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileDataBlockEncoder.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileDataBlockEncoderImpl.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/HFilePrettyPrinter.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV1.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV2.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileWriterV1.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileWriterV2.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/LruBlockCache.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/NoOpDataBlockEncoder.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/MemStore.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/Store.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFile.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFileScanner.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/metrics/SchemaConfigured.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/util/ByteBufferUtils.java
* /hbase/trunk/src/main/ruby/hbase/admin.rb
* /hbase/trunk/src/test/java/org/apache/hadoop/hbase/HBaseTestCase.java
* /hbase/trunk/src/test/java/org/apache/hadoop/hbase/HFilePerformanceEvaluation.java
* /hbase/trunk/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java
* /hbase/trunk/src/test/java/org/apache/hadoop/hbase/io/TestHalfStoreFileReader.java
* /hbase/trunk/src/test/java/org/apache/hadoop/hbase/io/TestHeapSize.java
* /hbase/trunk/src/test/java/org/apache/hadoop/hbase/io/hfile/CacheTestUtils.java
* /hbase/trunk/src/test/java/org/apache/hadoop/hbase/io/hfile/TestCacheOnWrite.java
* /hbase/trunk/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlock.java
* /hbase/trunk/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlockIndex.java
* /hbase/trunk/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileDataBlockEncoder.java
* /hbase/trunk/src/test/java/org

[jira] [Updated] (HBASE-4218) Data Block Encoding of KeyValues (aka delta encoding / prefix compression)

2011-12-25 Thread Zhihong Yu (Updated) (JIRA)

 [ https://issues.apache.org/jira/browse/HBASE-4218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Zhihong Yu updated HBASE-4218:
--

Comment: was deleted

(was: Integrated in HBase-TRUNK #2573 (See [https://builds.apache.org/job/HBase-TRUNK/2573/])
HBASE-4218 Data Block Encoding of KeyValues (aka delta encoding / prefix compression) - files used for testing
HBASE-4218 Data Block Encoding of KeyValues (aka delta encoding / prefix compression) (Jacek, Mikhail)

tedyu :
Files :
* /hbase/trunk/src/test/java/org/apache/hadoop/hbase/io/encoding
* /hbase/trunk/src/test/java/org/apache/hadoop/hbase/io/encoding/RedundantKVGenerator.java
* /hbase/trunk/src/test/java/org/apache/hadoop/hbase/io/encoding/TestBufferedDataBlockEncoder.java
* /hbase/trunk/src/test/java/org/apache/hadoop/hbase/io/encoding/TestDataBlockEncoders.java

tedyu :
Files :
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/HColumnDescriptor.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/KeyValue.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/HalfStoreFileReader.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/encoding
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/encoding/BitsetKeyDeltaEncoder.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/encoding/BufferedDataBlockEncoder.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/encoding/CompressionState.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/encoding/CopyKeyDataBlockEncoder.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/encoding/DataBlockEncoder.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/encoding/DataBlockEncodings.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/encoding/DiffKeyDeltaEncoder.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/encoding/EncodedDataBlock.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/encoding/EncoderBufferTooSmallException.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/encoding/FastDiffDeltaEncoder.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/encoding/PrefixKeyDeltaEncoder.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/AbstractHFileReader.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/AbstractHFileWriter.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/BlockType.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/HFile.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlockIndex.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileDataBlockEncoder.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileDataBlockEncoderImpl.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/HFilePrettyPrinter.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV1.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV2.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileWriterV1.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileWriterV2.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/LruBlockCache.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/NoOpDataBlockEncoder.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/MemStore.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/Store.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFile.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFileScanner.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/metrics/SchemaConfigured.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/util/ByteBufferUtils.java
* /hbase/trunk/src/main/ruby/hbase/admin.rb
* /hbase/trunk/src/test/java/org/apache/hadoop/hbase/HBaseTestCase.java
* /hbase/trunk/src/test/java/org/apache/hadoop/hbase/HFilePerformanceEvaluation.java
* /hbase/trunk/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java
* /hbase/trunk/src/test/java/org/apache/hadoop/hbase/io/TestHalfStoreFileReader.java
* /hbase/trunk/src/test/java/org/apache/hadoop/hbase/io/TestHeapSize.java
* /hbase/trunk/src/test/java/org/apache/hadoop/hbase/io/hfile/CacheTestUtils.java
* /hbase/trunk/src/test/java/org/apache/hadoop/hbase/io/hfile/TestCacheOnWrite.java
* /hbase/trunk/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlock.java
* /hbase/trunk/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlockIndex.java
* /hbase/trunk/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileDataBlockEncoder.java
* /hbase/trunk/src/test/java/org/apache/hadoop

[jira] [Commented] (HBASE-4224) Need a flush by regionserver rather than by table option

2011-12-25 Thread jirapos...@reviews.apache.org (Commented) (JIRA)

[ https://issues.apache.org/jira/browse/HBASE-4224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13175844#comment-13175844 ]

jirapos...@reviews.apache.org commented on HBASE-4224:
--


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/3308/#review4116
---


Please give us some results from testing on a cluster.


/src/main/java/org/apache/hadoop/hbase/ServerName.java


I think we should perform stricter checking on hostname, without using DNS.
See 
http://regexlib.com/DisplayPatterns.aspx?cattabindex=1&categoryId=2&AspxAutoDetectCookieSupport=1



/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java


The 'execute' after 'the' should be removed.



/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java


The ctor with ThreadFactory parameter should be used so that threads in 
this pool can have names.



/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java


Second component should read 'all regions on a region server'



/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java


serverRegionsMap might be null upon return.
I don't see null check below.



/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java


Should read 'every region server'



/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java


Should read 'whose WAL'



/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java


I think Future.get(long timeout, TimeUnit unit) should be used here so that 
we don't wait indefinitely.
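
To make the two pool-related suggestions above concrete, here is a minimal 
standalone sketch (illustrative names only, not the patch itself): the 
ThreadFactory gives the pool's threads recognizable names, and the bounded 
Future.get avoids waiting indefinitely.

{code}
import java.util.concurrent.{Callable, Executors, ThreadFactory, TimeUnit}
import java.util.concurrent.atomic.AtomicInteger

object FlushPoolSketch {
  def main(args: Array[String]) {
    val counter = new AtomicInteger()
    // Named threads show up usefully in logs and stack dumps.
    val pool = Executors.newFixedThreadPool(4, new ThreadFactory {
      def newThread(r: Runnable) =
        new Thread(r, "flush-region-" + counter.incrementAndGet())
    })
    val future = pool.submit(new Callable[String] {
      def call() = Thread.currentThread().getName
    })
    // Bounded wait instead of a potentially indefinite get().
    println(future.get(60, TimeUnit.SECONDS))
    pool.shutdown()
  }
}
{code}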


- Ted


On 2011-12-24 04:31:50, Akash  Ashok wrote:
bq.  
bq.  ---
bq.  This is an automatically generated e-mail. To reply, visit:
bq.  https://reviews.apache.org/r/3308/
bq.  ---
bq.  
bq.  (Updated 2011-12-24 04:31:50)
bq.  
bq.  
bq.  Review request for hbase.
bq.  
bq.  
bq.  Summary
bq.  ---
bq.  
bq.  Flush by RegionServer
bq.  
bq.  
bq.  This addresses bug HBase-4224.
bq.  https://issues.apache.org/jira/browse/HBase-4224
bq.  
bq.  
bq.  Diffs
bq.  -
bq.  
bq.  /src/main/java/org/apache/hadoop/hbase/ServerName.java 1222902
bq.  /src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java 1222902
bq.  /src/main/java/org/apache/hadoop/hbase/ipc/HRegionInterface.java 1222902
bq.  /src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java 1222902
bq.  
bq.  Diff: https://reviews.apache.org/r/3308/diff
bq.  
bq.  
bq.  Testing
bq.  ---
bq.  
bq.  
bq.  Thanks,
bq.  
bq.  Akash
bq.  
bq.



> Need a flush by regionserver rather than by table option
> 
>
> Key: HBASE-4224
> URL: https://issues.apache.org/jira/browse/HBASE-4224
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Reporter: stack
>Assignee: Akash Ashok
> Attachments: HBase-4224-v2.patch, HBase-4224.patch
>
>
> This evening I needed to clean out logs on the cluster.  Logs are by 
> regionserver.  To let go of logs, we need to have all edits emptied from 
> memory.  The only flush is by table or region.  We need to be able to flush 
> the regionserver.  Need to add this.

[jira] [Commented] (HBASE-4608) HLog Compression

2011-12-25 Thread Li Pi (Commented) (JIRA)

[ https://issues.apache.org/jira/browse/HBASE-4608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13175855#comment-13175855 ]

Li Pi commented on HBASE-4608:
--

Okay. I'm confused.

I disabled compression, went back to trunk, and changed these lines of code in 
HLogKey:

{code}
System.out.println("Writing region: " + this.encodedRegionName.hashCode());
Bytes.writeByteArray(out, this.encodedRegionName);
System.out.println("Writing table: " + this.tablename.hashCode());
Bytes.writeByteArray(out, this.tablename);
{code}

and

{code}
in.readFully(this.encodedRegionName);
System.out.println("Reading region: " + this.encodedRegionName.hashCode());
this.tablename = Bytes.readByteArray(in);
System.out.println("Reading table: " + this.tablename.hashCode());
{code}

Then I ran the replay-after-partial-flush test and got this as output:

{noformat}
PositionWritten 124
Writing region: 1251181435
Writing table: 446506621
PositionWritten 319
Writing region: 1251181435
Writing table: 446506621
PositionWritten 514
Writing region: 1251181435
Writing table: 446506621
PositionWritten 709
Writing region: 1251181435
Writing table: 446506621
PositionWritten 904
Writing region: 1251181435
Writing table: 446506621
PositionWritten 1099
Writing region: 1251181435
Writing table: 446506621
PositionWritten 1294
Writing region: 1251181435
Writing table: 446506621
PositionWritten 1489
Writing region: 1251181435
Writing table: 446506621
PositionWritten 1684
Writing region: 1251181435
Writing table: 446506621
PositionWritten 1879
Writing region: 1251181435
Writing table: 446506621
PositionWritten 2074
Writing region: 1251181435
Writing table: 446506621
PositionWritten 2289
Writing region: 1251181435
Writing table: 446506621
PositionWritten 2484
Writing region: 1251181435
Writing table: 446506621
PositionWritten 2679
Writing region: 1251181435
Writing table: 446506621
PositionWritten 2874
Writing region: 1251181435
Writing table: 446506621
PositionWritten 3069
Writing region: 1251181435
Writing table: 446506621
PositionWritten 3264
Writing region: 1251181435
Writing table: 446506621
PositionWritten 3459
Writing region: 1251181435
Writing table: 446506621
PositionWritten 3654
Writing region: 1251181435
Writing table: 446506621
PositionWritten 3849
Writing region: 1251181435
Writing table: 446506621
PositionWritten 4044
Writing region: 1251181435
Writing table: 446506621
PositionWritten 4239
Writing region: 1251181435
Writing table: 446506621
PositionWritten 4454
Writing region: 1251181435
Writing table: 446506621
PositionWritten 4649
Writing region: 1251181435
Writing table: 446506621
PositionWritten 4844
Writing region: 1251181435
Writing table: 446506621
PositionWritten 5039
Writing region: 1251181435
Writing table: 446506621
PositionWritten 5234
Writing region: 1251181435
Writing table: 446506621
PositionWritten 5429
Writing region: 1251181435
Writing table: 446506621
PositionWritten 5624
Writing region: 1251181435
Writing table: 446506621
PositionWritten 5819
Writing region: 1251181435
Writing table: 446506621
PositionWritten 124
Writing region: 736259394
Writing table: 510860944
PositionWritten 319
Writing region: 1336786910
Writing table: 403681456
PositionWritten 514
Writing region: 1336786910
Writing table: 403681456
PositionWritten 709
Writing region: 1336786910
Writing table: 403681456
PositionWritten 904
Writing region: 1336786910
Writing table: 403681456
PositionWritten 1099
Writing region: 1336786910
Writing table: 403681456
PositionWritten 1294
Writing region: 1336786910
Writing table: 403681456
PositionWritten 1489
Writing region: 1336786910
Writing table: 403681456
PositionWritten 1684
Writing region: 1336786910
Writing table: 403681456
PositionWritten 1879
Writing region: 1336786910
Writing table: 403681456
PositionWritten 2074
Writing region: 1336786910
Writing table: 403681456
PositionWritten 2289
Writing region: 1336786910
Writing table: 403681456
PositionWritten 2484
Writing region: 1336786910
Writing table: 403681456
PositionWritten 2679
Writing region: 1336786910
Writing table: 403681456
PositionWritten 2874
Writing region: 1336786910
Writing table: 403681456
PositionWritten 3069
Writing region: 1336786910
Writing table: 403681456
PositionWritten 3264
Writing region: 1336786910
Writing table: 403681456
PositionWritten 3459
Writing region: 1336786910
Writing table: 403681456
PositionWritten 3654
Writing region: 1336786910
Writing table: 403681456
PositionWritten 3849
Writing region: 1336786910
Writing table: 403681456
PositionWritten 4044
Writing region: 1336786910
Writing table: 403681456
PositionWritten 4239
Writing region: 1336786910
Writing table: 403681456
PositionWritten 4454
Writing region: 1336786910
Writing table: 403681456
PositionWritten 4649
Writing region: 1336786910
Writing table: 403681456
PositionWritten 4844
Writing region: 1336786910
Writing table: 403681456
PositionWritten 5039
Writing region: 1336786910
Writing table: 403681456
PositionWritten 5234
Writing region: 13367
{noformat}

[jira] [Updated] (HBASE-5010) Filter HFiles based on TTL

2011-12-25 Thread Phabricator (Updated) (JIRA)

 [ https://issues.apache.org/jira/browse/HBASE-5010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Phabricator updated HBASE-5010:
---

Attachment: D1017.1.patch

mbautin requested code review of "[jira] [HBASE-5010] Filter HFiles based on 
TTL".
Reviewers: Kannan, Liyin, tedyu, JIRA

  This is the trunk version of D909. The main difference is that there is a 
minVersions CF setting in trunk, and when minVersions is not zero, we can't 
exclude StoreFiles based on TTL, because we might have to retrieve KVs with 
expired timestamps to comply with the minVersions requirement.
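
As a rough illustration of that rule (a hypothetical helper, not the actual 
patch): a store file may be skipped only when minVersions is zero and every 
KV in it is older than the oldest non-expired timestamp.

{code}
object TtlFilterSketch {
  // maxTimestampInFile would come from the file's TimeRangeTracker metadata.
  def shouldReadStoreFile(maxTimestampInFile: Long, ttlMs: Long,
                          minVersions: Int, now: Long): Boolean = {
    val oldestUnexpiredTs = now - ttlMs
    // With minVersions > 0 we may have to return expired KVs, so never skip.
    minVersions > 0 || maxTimestampInFile >= oldestUnexpiredTs
  }

  def main(args: Array[String]) {
    val now = System.currentTimeMillis()
    // File whose newest KV expired an hour ago, minVersions == 0: skip it.
    println(shouldReadStoreFile(now - 7200000L, 3600000L, 0, now)) // false
    // Same file with minVersions == 1: must still be read.
    println(shouldReadStoreFile(now - 7200000L, 3600000L, 1, now)) // true
  }
}
{code}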

TEST PLAN
  Unit tests (including a new one).

REVISION DETAIL
  https://reviews.facebook.net/D1017

AFFECTED FILES
  src/main/java/org/apache/hadoop/hbase/io/hfile/LruBlockCache.java
  src/main/java/org/apache/hadoop/hbase/regionserver/ExplicitColumnTracker.java
  src/main/java/org/apache/hadoop/hbase/regionserver/KeyValueScanner.java
  src/main/java/org/apache/hadoop/hbase/regionserver/MemStore.java
  src/main/java/org/apache/hadoop/hbase/regionserver/NonLazyKeyValueScanner.java
  src/main/java/org/apache/hadoop/hbase/regionserver/ScanQueryMatcher.java
  src/main/java/org/apache/hadoop/hbase/regionserver/ScanWildcardColumnTracker.java
  src/main/java/org/apache/hadoop/hbase/regionserver/Store.java
  src/main/java/org/apache/hadoop/hbase/regionserver/StoreFile.java
  src/main/java/org/apache/hadoop/hbase/regionserver/StoreFileScanner.java
  src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java
  src/main/java/org/apache/hadoop/hbase/regionserver/TimeRangeTracker.java
  src/test/java/org/apache/hadoop/hbase/regionserver/TestCompaction.java
  src/test/java/org/apache/hadoop/hbase/regionserver/TestCompoundBloomFilter.java
  src/test/java/org/apache/hadoop/hbase/regionserver/TestExplicitColumnTracker.java
  src/test/java/org/apache/hadoop/hbase/regionserver/TestMemStore.java
  src/test/java/org/apache/hadoop/hbase/regionserver/TestMinVersions.java
  src/test/java/org/apache/hadoop/hbase/regionserver/TestQueryMatcher.java
  src/test/java/org/apache/hadoop/hbase/regionserver/TestScanWildcardColumnTracker.java
  src/test/java/org/apache/hadoop/hbase/regionserver/TestSelectScannersUsingTTL.java
  src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreFile.java
  src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreScanner.java

MANAGE HERALD DIFFERENTIAL RULES
  https://reviews.facebook.net/herald/view/differential/

WHY DID I GET THIS EMAIL?
  https://reviews.facebook.net/herald/transcript/2127/

Tip: use the X-Herald-Rules header to filter Herald messages in your client.


> Filter HFiles based on TTL
> --
>
> Key: HBASE-5010
> URL: https://issues.apache.org/jira/browse/HBASE-5010
> Project: HBase
>  Issue Type: Bug
>Reporter: Mikhail Bautin
>Assignee: Mikhail Bautin
> Attachments: D1017.1.patch, D909.1.patch, D909.2.patch
>
>
> In ScanWildcardColumnTracker we have
> {code:java}
>  
>   this.oldestStamp = EnvironmentEdgeManager.currentTimeMillis() - ttl;
>   ...
>   private boolean isExpired(long timestamp) {
> return timestamp < oldestStamp;
>   }
> {code}
> but this time range filtering does not participate in HFile selection. In one 
> real case this caused next() calls to time out because all KVs in a table got 
> expired, but next() had to iterate over the whole table to find that out. We 
> should be able to filter out those HFiles right away. I think a reasonable 
> approach is to add a "default timerange filter" to every scan for a CF with a 
> finite TTL and utilize existing filtering in 
> StoreFile.Reader.passesTimerangeFilter.

[jira] [Updated] (HBASE-5010) Filter HFiles based on TTL

2011-12-25 Thread Zhihong Yu (Updated) (JIRA)

 [ https://issues.apache.org/jira/browse/HBASE-5010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Zhihong Yu updated HBASE-5010:
--

Status: Patch Available  (was: Open)


[jira] [Commented] (HBASE-5010) Filter HFiles based on TTL

2011-12-25 Thread Phabricator (Commented) (JIRA)

[ https://issues.apache.org/jira/browse/HBASE-5010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13175859#comment-13175859 ]

Phabricator commented on HBASE-5010:


tedyu has commented on the revision "[jira] [HBASE-5010] Filter HFiles based on 
TTL".

  Looks good.

INLINE COMMENTS
  src/main/java/org/apache/hadoop/hbase/regionserver/StoreFile.java:1207 Should 
we mention oldestUnexpiredTS here?

REVISION DETAIL
  https://reviews.facebook.net/D1017


[jira] [Commented] (HBASE-5010) Filter HFiles based on TTL

2011-12-25 Thread Hadoop QA (Commented) (JIRA)

[ https://issues.apache.org/jira/browse/HBASE-5010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13175860#comment-13175860 ]

Hadoop QA commented on HBASE-5010:
--

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12508613/D1017.1.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 29 new or modified tests.

-1 patch.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/598//console

This message is automatically generated.

[jira] [Commented] (HBASE-5010) Filter HFiles based on TTL

2011-12-25 Thread Lars Hofhansl (Commented) (JIRA)

[ https://issues.apache.org/jira/browse/HBASE-5010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13175861#comment-13175861 ]

Lars Hofhansl commented on HBASE-5010:
--

MinVersions is my doing. I'll take a look as soon as I get a chance. 




[jira] [Commented] (HBASE-4218) Data Block Encoding of KeyValues (aka delta encoding / prefix compression)

2011-12-25 Thread Phabricator (Commented) (JIRA)

[ https://issues.apache.org/jira/browse/HBASE-4218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13175880#comment-13175880 ]

Phabricator commented on HBASE-4218:


tedyu has commented on the revision "[jira] [HBASE-4218] HFile data block 
encoding (delta encoding)".

INLINE COMMENTS
  src/main/java/org/apache/hadoop/hbase/regionserver/Store.java:294 This method 
should be made package private.

REVISION DETAIL
  https://reviews.facebook.net/D447


> Data Block Encoding of KeyValues  (aka delta encoding / prefix compression)
> ---
>
> Key: HBASE-4218
> URL: https://issues.apache.org/jira/browse/HBASE-4218
> Project: HBase
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 0.94.0
>Reporter: Jacek Migdal
>Assignee: Mikhail Bautin
>  Labels: compression
> Fix For: 0.94.0
>
> Attachments: 0001-Delta-encoding-fixed-encoded-scanners.patch, 
> 0001-Delta-encoding.patch, D447.1.patch, D447.10.patch, D447.11.patch, 
> D447.12.patch, D447.13.patch, D447.2.patch, D447.3.patch, D447.4.patch, 
> D447.5.patch, D447.6.patch, D447.7.patch, D447.8.patch, D447.9.patch, 
> Data-block-encoding-2011-12-23.patch, 
> Delta-encoding.patch-2011-12-22_11_52_07.patch, 
> Delta_encoding_with_memstore_TS.patch, open-source.diff
>
>
> A compression for keys. Keys are sorted in HFile and they are usually very 
> similar. Because of that, it is possible to design better compression than 
> general-purpose algorithms offer.
> It is an additional step designed to be used in memory. It aims to save 
> memory in cache as well as to speed up seeks within HFileBlocks. It should 
> improve performance a lot if key lengths are larger than value lengths. For 
> example, it makes a lot of sense to use it when the value is a counter.
> Initial tests on real data (key length ~ 90 bytes, value length = 8 bytes) 
> show that I could achieve a decent level of compression:
>  key compression ratio: 92%
>  total compression ratio: 85%
>  LZO on the same data: 85%
>  LZO after delta encoding: 91%
> while having much better performance (20-80% faster decompression than LZO). 
> Moreover, it should allow far more efficient seeking, which should improve 
> performance a bit.
> It seems that simple compression algorithms are good enough. Most of the 
> savings are due to prefix compression, int128 encoding, timestamp diffs and 
> bitfields to avoid duplication. That way, comparisons of compressed data can 
> be much faster than a byte comparator (thanks to prefix compression and 
> bitfields).
> In order to implement it in HBase, two important changes in design will be 
> needed:
> - solidify the interface to HFileBlock / HFileReader Scanner to provide 
> seeking and iterating; access to the uncompressed buffer in HFileBlock will 
> have bad performance
> - extend comparators to support comparison assuming that the first N bytes 
> are equal (or some fields are equal)
> Link to a discussion about something similar:
> http://search-hadoop.com/m/5aqGXJEnaD1/hbase+windows&subj=Re+prefix+compression
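
For illustration only, a minimal sketch of the core prefix-compression idea 
from the description above (a hypothetical helper, not the actual 
DataBlockEncoder from the patch): each sorted key is stored as a 
(shared-prefix length, suffix) pair, so the long common prefixes of adjacent 
HFile keys cost almost nothing.

{code}
import java.io.{ByteArrayOutputStream, DataOutputStream}

object PrefixEncodeSketch {
  // Length of the common prefix of two byte arrays.
  def commonPrefix(a: Array[Byte], b: Array[Byte]): Int = {
    var i = 0
    while (i < math.min(a.length, b.length) && a(i) == b(i)) i += 1
    i
  }

  // Encode each key as (shared-with-previous length, suffix length, suffix).
  def encode(sortedKeys: Seq[Array[Byte]]): Array[Byte] = {
    val bos = new ByteArrayOutputStream()
    val out = new DataOutputStream(bos)
    var prev = Array.empty[Byte]
    for (key <- sortedKeys) {
      val shared = commonPrefix(prev, key)
      out.writeInt(shared)
      out.writeInt(key.length - shared)
      out.write(key, shared, key.length - shared)
      prev = key
    }
    out.flush()
    bos.toByteArray
  }

  def main(args: Array[String]) {
    val keys = Seq("row0001/cf:a", "row0001/cf:b", "row0002/cf:a").map(_.getBytes("UTF-8"))
    // The 12-byte keys collapse to suffixes of 12, 1 and 6 bytes; a real
    // encoder would also use variable-length ints for the two length fields.
    println("encoded " + keys.size + " keys into " + encode(keys).length + " bytes")
  }
}
{code}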

[jira] [Commented] (HBASE-5010) Filter HFiles based on TTL

2011-12-25 Thread Phabricator (Commented) (JIRA)

[ https://issues.apache.org/jira/browse/HBASE-5010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13175883#comment-13175883 ]

Phabricator commented on HBASE-5010:


lhofhansl has commented on the revision "[jira] [HBASE-5010] Filter HFiles 
based on TTL".

  Looks pretty good. See minor comments inline.
  minVersions handling looks correct to me.

  I think we should call out somewhere that this optimization cannot be used 
with MIN_VERSIONS enabled, along with some guess for the typical performance 
penalty.

INLINE COMMENTS
  src/main/java/org/apache/hadoop/hbase/io/hfile/LruBlockCache.java:737 As said 
in my 0.90 review, this is better made package private, with the test class in 
the same package.
  
src/main/java/org/apache/hadoop/hbase/regionserver/ExplicitColumnTracker.java:74
 Javadoc for oldestUnexpiredTS is missing.
  src/main/java/org/apache/hadoop/hbase/regionserver/ScanQueryMatcher.java:134 
Javadoc for oldestUnexpiredTS

REVISION DETAIL
  https://reviews.facebook.net/D1017


[jira] [Commented] (HBASE-5088) A concurrency issue on SoftValueSortedMap

2011-12-25 Thread Jieshan Bean (Commented) (JIRA)

[ https://issues.apache.org/jira/browse/HBASE-5088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13175887#comment-13175887 ]

Jieshan Bean commented on HBASE-5088:
-

@Ted,
At first, I also thought we would get higher performance with this patch, 
because all the "synchronized" keywords were removed. But it slowed down.
I agree with the explanation from Lars.
Our JDK version is 1.6.0_22, and below is our OS information:
{noformat}
C3S3:~ # cat /proc/version
Linux version 2.6.32.12-0.7-default (geeko@buildhost) (gcc version 4.3.4 
[gcc-4_3-branch revision 152973] (SUSE Linux) ) #1 SMP 2010-05-20 11:14:20 +0200
C3S3:/proc # lsb_release -a
LSB Version:
core-2.0-noarch:core-3.2-noarch:core-4.0-noarch:core-2.0-x86_64:core-3.2-x86_64:core-4.0-x86_64:desktop-4.0-amd64:desktop-4.0-noarch:graphics-2.0-amd64:graphics-2.0-noarch:graphics-3.2-amd64:graphics-3.2-noarch:graphics-4.0-amd64:graphics-4.0-noarch
Distributor ID: SUSE LINUX
Description:SUSE Linux Enterprise Server 11 (x86_64)
Release:11
Codename:   n/a
{noformat}

We'll run more tests across the read vs. write cases and give out the results.

@Lars,
Sorry, I didn't do another comparison with SoftValueSortedMap replaced by 
ConcurrentSkipListMap. I am planning to do it, including the functional test 
and the performance test. Then we can choose the better one.

> A concurrency issue on SoftValueSortedMap
> -
>
> Key: HBASE-5088
> URL: https://issues.apache.org/jira/browse/HBASE-5088
> Project: HBase
>  Issue Type: Bug
>  Components: client
>Affects Versions: 0.90.4, 0.94.0
>Reporter: Jieshan Bean
>Assignee: Jieshan Bean
> Attachments: 5088-useMapInterfaces.txt, 5088.generics.txt, 
> HBase-5088-90.patch, HBase-5088-trunk.patch, HBase5088Reproduce.java
>
>
> SoftValueSortedMap is backed by a TreeMap. All the methods in this class are 
> synchronized. If we use these methods to add/delete elements, it's ok.
> But in HConnectionManager#getCachedLocation, it uses headMap to get a view 
> of SoftValueSortedMap#internalMap. Once we operate 
> on this view map (like add/delete) from other threads, a concurrency issue 
> may occur.
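
To make the hazard above concrete, a minimal standalone repro (illustrative 
only, not the HBase code): a map whose own methods are synchronized, but whose 
headMap() hands out an unguarded view of the backing TreeMap, just as 
SoftValueSortedMap does.

{code}
import java.util.{ConcurrentModificationException, SortedMap, TreeMap}

object HeadMapHazard {
  private val internalMap = new TreeMap[Integer, String]()

  def put(k: Integer, v: String): Unit = synchronized { internalMap.put(k, v) }
  def remove(k: Integer): Unit = synchronized { internalMap.remove(k) }
  // BUG: the returned view shares state with internalMap but is no longer
  // protected by this object's lock once it escapes.
  def headMap(k: Integer): SortedMap[Integer, String] =
    synchronized { internalMap.headMap(k) }

  def main(args: Array[String]) {
    for (i <- 0 until 100000) put(i, "v")
    val writer = new Thread(new Runnable {
      def run() { for (i <- 0 until 100000) remove(i) }
    })
    writer.start()
    try {
      // Unguarded iteration over the live view races with the writer thread.
      val it = headMap(100000).keySet.iterator
      while (it.hasNext) it.next()
      println("iteration finished without interference")
    } catch {
      case e: ConcurrentModificationException =>
        println("ConcurrentModificationException, as described in this issue")
    }
    writer.join()
  }
}
{code}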

[jira] [Created] (HBASE-5094) The META can hold an entry for a region with a different server name from the one actually in the AssignmentManager thus making the region unaccessible.

2011-12-25 Thread ramkrishna.s.vasudevan (Created) (JIRA)
The META can hold an entry for a region with a different server name from the 
one actually in the AssignmentManager thus making the region unaccessible.


 Key: HBASE-5094
 URL: https://issues.apache.org/jira/browse/HBASE-5094
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu


R1 is reassigned to RS3 during RS1's shutdown, even though R1 was just assigned 
to RS2 by the load balancer. So the .META. table indicated R1 is on RS3. Both 
RS2 and RS3 think they have R1. Later, when RS3 shuts down, R1 is reassigned to 
RS2, which will indicate ALREADY_OPENED. Thus the region is considered assigned 
to RS2 even though .META. indicates it is on RS3.

T1: Load balancer tried to move R1 from RS1 to RS2
. 2011-11-21 14:03:20,812 INFO org.apache.hadoop.hbase.master.HMaster: balance 
hri=tableXY,\xB8Q\xEB\x85\x1E\xB8Q\xDF,1321573099841.ee2e205a60f1bb06cc73bc9df06289df.,
 src=skynet-1,60020,1321912978281, dest=skynet-4,60020,1321912999305

T2: RS1 shutdown. 2011-11-21 14:03:24,759 DEBUG 
org.apache.hadoop.hbase.master.ServerManager: 
Added=skynet-1,60020,1321912978281 to dead servers, submitted shutdown handler 
to be executed, root=false, meta=true

T3:R1 is opened on RS2. 2011-11-21 14:03:26,131 DEBUG 
org.apache.hadoop.hbase.master.handler.OpenedRegionHandler: The master has 
opened the region 
tableXY,\xB8Q\xEB\x85\x1E\xB8Q\xDF,1321573099841.ee2e205a60f1bb06cc73bc9df06289df.
 that was online on skynet-4,60020,1321912999305

T4: As part of RS1 shutdown handling, region reassignment starts. It uses the 
region location captured at T2. 2011-11-21 14:03:26,152 INFO 
org.apache.hadoop.hbase.master.handler.ServerShutdownHandler: Reassigning 32 
region(s) that skynet-1,60020,1321912978281 was carrying (skipping 0 regions(s) 
that are already in transition)

T5: R1 is assigned to RS3. 2011-11-21 14:03:27,404 DEBUG 
org.apache.hadoop.hbase.zookeeper.ZKUtil: master:6-0x133b84f9f49 
Retrieved 115 byte(s) of data from znode 
/hbase/unassigned/ee2e205a60f1bb06cc73bc9df06289df; 
data=region=tableXY,\xB8Q\xEB\x85\x1E\xB8Q\xDF,1321573099841.ee2e205a60f1bb06cc73bc9df06289df.,
 origin=skynet-3,60020,1321912991430, state=RS_ZK_REGION_OPENED

T6: RS3 shutdown. R1 is reassigned to RS2. 2011-11-21 14:03:37,899 DEBUG 
org.apache.hadoop.hbase.master.AssignmentManager: ALREADY_OPENED region 
tableXY,\xB8Q\xEB\x85\x1E\xB8Q\xDF,1321573099841.ee2e205a60f1bb06cc73bc9df06289df.
 to skynet-4,60020,1321912999305

From the AssignmentManager's point of view, R1 is assigned to RS2. The .META. 
table indicates the location is RS3.



[jira] [Updated] (HBASE-5094) The META can hold an entry for a region with a different server name from the one actually in the AssignmentManager thus making the region inaccessible.

2011-12-25 Thread ramkrishna.s.vasudevan (Updated) (JIRA)

 [ https://issues.apache.org/jira/browse/HBASE-5094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ramkrishna.s.vasudevan updated HBASE-5094:
--

Summary: The META can hold an entry for a region with a different server 
name from the one actually in the AssignmentManager thus making the region 
inaccessible.  (was: The META can hold an entry for a region with a different 
server name from the one actually in the AssignmentManager thus making the 
region unaccessible.)

[jira] [Commented] (HBASE-5094) The META can hold an entry for a region with a different server name from the one actually in the AssignmentManager thus making the region inaccessible.

2011-12-25 Thread ramkrishna.s.vasudevan (Commented) (JIRA)

[ https://issues.apache.org/jira/browse/HBASE-5094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13175899#comment-13175899 ]

ramkrishna.s.vasudevan commented on HBASE-5094:
---

{code}
RegionState rit = this.services.getAssignmentManager()
    .isRegionInTransition(e.getKey());
ServerName addressFromAM = this.services.getAssignmentManager()
    .getRegionServerOfRegion(e.getKey());
if (rit != null && !rit.isClosing() && !rit.isPendingClose()) {
  // Skip regions that were in transition unless CLOSING or
  // PENDING_CLOSE
  LOG.info("Skip assigning region " + rit.toString());
} else if (addressFromAM != null
    && !addressFromAM.equals(this.serverName)) {
  LOG.debug("Skip assigning region "
      + e.getKey().getRegionNameAsString()
      + " because it has been opened in "
      + addressFromAM.getServerName());
}
{code}
In ServerShutdownHandler we try to get the region's address from the AM. This 
address is still null because it has not yet been updated after the region was 
opened, i.e. the callback that runs after the znode deletion has not yet 
completed on the master side. Removal from RIT, however, has already completed 
on the master side, so the handler triggers a new assignment.
In other words, there is a small window between the moment an opened region is 
actually added to the online list and the moment the ServerShutdownHandler 
checks the existing address in the AM.
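
For what it's worth, here is a rough sketch of one way to narrow that window 
(purely illustrative, with hypothetical stand-in types, and not the actual fix 
for this issue): treat a null address from the AM as inconclusive and consult 
the master's online-regions view before reassigning.

{code}
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch only; these are stand-in types, not HBase classes.
public class ReassignGuardSketch {
  private final Map<String, String> amLocations = new ConcurrentHashMap<>();
  private final Set<String> onlineRegions = ConcurrentHashMap.newKeySet();

  // Decide whether the shutdown handler should reassign a region that the
  // dead server was carrying.
  boolean shouldReassign(String region, String deadServer) {
    String loc = amLocations.get(region);
    if (loc != null && !loc.equals(deadServer)) {
      return false; // the AM already knows it lives on a live server
    }
    // A null location is ambiguous: the region may have just opened and the
    // post-deletion callback may not have updated the AM yet. Re-check the
    // online list instead of assuming the region is unaccounted for.
    if (loc == null && onlineRegions.contains(region)) {
      return false;
    }
    return true; // genuinely unaccounted for: reassign
  }

  public static void main(String[] args) {
    ReassignGuardSketch sketch = new ReassignGuardSketch();
    // Simulate the window: region just opened elsewhere, AM not yet updated.
    sketch.onlineRegions.add("R1");
    System.out.println(sketch.shouldReassign("R1", "RS1")); // prints false
  }
}
{code}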

> The META can hold an entry for a region with a different server name from the 
> one actually in the AssignmentManager thus making the region inaccessible.
> 
>
> Key: HBASE-5094
> URL: https://issues.apache.org/jira/browse/HBASE-5094
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>
> R1 is reassigned to RS3 during RS1 shutdown, even though R1 was just assigned 
> to RS2 by the load balancer. So the .META. table indicated R1 was on RS3. Both 
> RS2 and RS3 think they have R1. Later, when RS3 shuts down, R1 is reassigned 
> to RS2, and RS2 reports ALREADY_OPENED. Thus the region is considered assigned 
> to RS2 even though .META. indicates it is on RS3.
> T1: Load balancer tried to move R1 from RS1 to RS2.
> 2011-11-21 14:03:20,812 INFO org.apache.hadoop.hbase.master.HMaster: 
> balance 
> hri=tableXY,\xB8Q\xEB\x85\x1E\xB8Q\xDF,1321573099841.ee2e205a60f1bb06cc73bc9df06289df.,
>  src=skynet-1,60020,1321912978281, dest=skynet-4,60020,1321912999305
> T2: RS1 shutdown. 2011-11-21 14:03:24,759 DEBUG 
> org.apache.hadoop.hbase.master.ServerManager: 
> Added=skynet-1,60020,1321912978281 to dead servers, submitted shutdown 
> handler to be executed, root=false, meta=true
> T3: R1 is opened on RS2. 2011-11-21 14:03:26,131 DEBUG 
> org.apache.hadoop.hbase.master.handler.OpenedRegionHandler: The master has 
> opened the region 
> tableXY,\xB8Q\xEB\x85\x1E\xB8Q\xDF,1321573099841.ee2e205a60f1bb06cc73bc9df06289df.
>  that was online on skynet-4,60020,1321912999305
> T4: As part of RS1 shutdown handling, region reassignment starts. It uses the 
> region location captured at T2. 2011-11-21 14:03:26,152 INFO 
> org.apache.hadoop.hbase.master.handler.ServerShutdownHandler: Reassigning 32 
> region(s) that skynet-1,60020,1321912978281 was carrying (skipping 0 
> regions(s) that are already in transition)
> T5: R1 is assigned to RS3. 2011-11-21 14:03:27,404 DEBUG 
> org.apache.hadoop.hbase.zookeeper.ZKUtil: master:6-0x133b84f9f49 
> Retrieved 115 byte(s) of data from znode 
> /hbase/unassigned/ee2e205a60f1bb06cc73bc9df06289df; 
> data=region=tableXY,\xB8Q\xEB\x85\x1E\xB8Q\xDF,1321573099841.ee2e205a60f1bb06cc73bc9df06289df.,
>  origin=skynet-3,60020,1321912991430, state=RS_ZK_REGION_OPENED
> T6: RS3 shutdown. R1 is reassigned to RS2. 2011-11-21 14:03:37,899 DEBUG 
> org.apache.hadoop.hbase.master.AssignmentManager: ALREADY_OPENED region 
> tableXY,\xB8Q\xEB\x85\x1E\xB8Q\xDF,1321573099841.ee2e205a60f1bb06cc73bc9df06289df.
>  to skynet-4,60020,1321912999305
> From the AssignmentManager's point of view, R1 is assigned to RS2, while the 
> .META. table indicates the location is RS3.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5094) The META can hold an entry for a region with a different server name from the one actually in the AssignmentManager thus making the region inaccessible.

2011-12-25 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5094:
--

Reporter: ramkrishna.s.vasudevan  (was: Ted Yu)

> The META can hold an entry for a region with a different server name from the 
> one actually in the AssignmentManager thus making the region inaccessible.
> 
>
> Key: HBASE-5094
> URL: https://issues.apache.org/jira/browse/HBASE-5094
> Project: HBase
>  Issue Type: Bug
>Reporter: ramkrishna.s.vasudevan
>
> R1 is reassigned to RS3 during RS1 shutdown, even though R1 was just assigned 
> to RS2 by the load balancer. So the .META. table indicated R1 was on RS3. Both 
> RS2 and RS3 think they have R1. Later, when RS3 shuts down, R1 is reassigned 
> to RS2, and RS2 reports ALREADY_OPENED. Thus the region is considered assigned 
> to RS2 even though .META. indicates it is on RS3.
> T1: Load balancer tried to move R1 from RS1 to RS2.
> 2011-11-21 14:03:20,812 INFO org.apache.hadoop.hbase.master.HMaster: 
> balance 
> hri=tableXY,\xB8Q\xEB\x85\x1E\xB8Q\xDF,1321573099841.ee2e205a60f1bb06cc73bc9df06289df.,
>  src=skynet-1,60020,1321912978281, dest=skynet-4,60020,1321912999305
> T2: RS1 shutdown. 2011-11-21 14:03:24,759 DEBUG 
> org.apache.hadoop.hbase.master.ServerManager: 
> Added=skynet-1,60020,1321912978281 to dead servers, submitted shutdown 
> handler to be executed, root=false, meta=true
> T3: R1 is opened on RS2. 2011-11-21 14:03:26,131 DEBUG 
> org.apache.hadoop.hbase.master.handler.OpenedRegionHandler: The master has 
> opened the region 
> tableXY,\xB8Q\xEB\x85\x1E\xB8Q\xDF,1321573099841.ee2e205a60f1bb06cc73bc9df06289df.
>  that was online on skynet-4,60020,1321912999305
> T4: As part of RS1 shutdown handling, region reassignment starts. It uses the 
> region location captured at T2. 2011-11-21 14:03:26,152 INFO 
> org.apache.hadoop.hbase.master.handler.ServerShutdownHandler: Reassigning 32 
> region(s) that skynet-1,60020,1321912978281 was carrying (skipping 0 
> regions(s) that are already in transition)
> T5: R1 is assigned to RS3. 2011-11-21 14:03:27,404 DEBUG 
> org.apache.hadoop.hbase.zookeeper.ZKUtil: master:6-0x133b84f9f49 
> Retrieved 115 byte(s) of data from znode 
> /hbase/unassigned/ee2e205a60f1bb06cc73bc9df06289df; 
> data=region=tableXY,\xB8Q\xEB\x85\x1E\xB8Q\xDF,1321573099841.ee2e205a60f1bb06cc73bc9df06289df.,
>  origin=skynet-3,60020,1321912991430, state=RS_ZK_REGION_OPENED
> T6: RS3 shutdown. R1 is reassigned to RS2. 2011-11-21 14:03:37,899 DEBUG 
> org.apache.hadoop.hbase.master.AssignmentManager: ALREADY_OPENED region 
> tableXY,\xB8Q\xEB\x85\x1E\xB8Q\xDF,1321573099841.ee2e205a60f1bb06cc73bc9df06289df.
>  to skynet-4,60020,1321912999305
> From the AssignmentManager's point of view, R1 is assigned to RS2, while the 
> .META. table indicates the location is RS3.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira