[jira] [Commented] (HBASE-6580) Deprecate HTablePool in favor of HConnection.getTable(...)

2013-08-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13729128#comment-13729128
 ] 

Hadoop QA commented on HBASE-6580:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12595855/6580-trunk-v4.txt
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 9 new 
or modified tests.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 4 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.zookeeper.lock.TestZKInterProcessReadWriteLock

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6604//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6604//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6604//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6604//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6604//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6604//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6604//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6604//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6604//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6604//console

This message is automatically generated.

> Deprecate HTablePool in favor of HConnection.getTable(...)
> --
>
> Key: HBASE-6580
> URL: https://issues.apache.org/jira/browse/HBASE-6580
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 0.94.6, 0.95.0
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
> Fix For: 0.98.0, 0.95.2, 0.94.11
>
> Attachments: 6580-trunk.txt, 6580-trunk-v2.txt, 6580-trunk-v3.txt, 
> 6580-trunk-v4.txt, HBASE-6580_v1.patch, HBASE-6580_v2.patch
>
>
> Update:
> I now propose deprecating HTablePool and instead introduce a getTable method 
> on HConnection and allow HConnection to manage the ThreadPool.
> Initial proposal:
> Here I propose a very simple TablePool.
> It could be called LightHTablePool (or something - if you have a better name).
> Internally it would maintain an HConnection and an Executor service and each 
> invocation of getTable(...) would create a new HTable and close() would just 
> close it.
> In testing I find this more light weight than HTablePool and easier to 
> monitor in terms of resources used.
> It would hardly be more than a few dozen lines of code.
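The proposal above can be sketched in a few lines. This is a hypothetical usage sketch of the HConnection.getTable(...) pattern the ticket introduces, not code from the patch itself; it assumes the post-HBASE-6580 client API and requires a running HBase cluster and client jars:

```java
// Sketch: the connection owns the shared resources (ZooKeeper watcher,
// thread pool); getTable(...) hands out a lightweight HTable whose close()
// releases only the table, not the connection.
HConnection connection = HConnectionManager.createConnection(conf);
try {
    HTableInterface table = connection.getTable("mytable");
    try {
        table.put(new Put(Bytes.toBytes("row")));
    } finally {
        table.close(); // cheap: does not tear down the connection
    }
} finally {
    connection.close(); // releases the thread pool and shared state
}
```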

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6580) Deprecate HTablePool in favor of HConnection.getTable(...)

2013-08-04 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-6580:
-

Attachment: 6580-trunk-v4.txt

Meh... This one should pass all tests.

> Deprecate HTablePool in favor of HConnection.getTable(...)
> --
>
> Key: HBASE-6580
> URL: https://issues.apache.org/jira/browse/HBASE-6580
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 0.94.6, 0.95.0
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
> Fix For: 0.98.0, 0.95.2, 0.94.11
>
> Attachments: 6580-trunk.txt, 6580-trunk-v2.txt, 6580-trunk-v3.txt, 
> 6580-trunk-v4.txt, HBASE-6580_v1.patch, HBASE-6580_v2.patch
>
>
> Update:
> I now propose deprecating HTablePool and instead introduce a getTable method 
> on HConnection and allow HConnection to manage the ThreadPool.
> Initial proposal:
> Here I propose a very simple TablePool.
> It could be called LightHTablePool (or something - if you have a better name).
> Internally it would maintain an HConnection and an Executor service and each 
> invocation of getTable(...) would create a new HTable and close() would just 
> close it.
> In testing I find this more light weight than HTablePool and easier to 
> monitor in terms of resources used.
> It would hardly be more than a few dozen lines of code.



[jira] [Commented] (HBASE-9091) Update ByteRange to maintain consumer's position

2013-08-04 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13729102#comment-13729102
 ] 

stack commented on HBASE-9091:
--

[~ndimiduk] Just trying to keep it simple:

The below:

+  public T decode(byte[] buff, int offset);

becomes

+  public T decode(byte[] buff, Int offset);

The Int (or MutableInteger) doesn't have to hold a volatile int; each thread 
can pass in its own instance.

Then ByteRange goes undisturbed.

Having to scan the byte array twice -- once to deserialize and then again to 
figure out where to start the next deserialization -- is how they used to do 
it on Flintstones computers.
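The mutable-offset pattern described above can be sketched in plain Java. The names here (DecodeSketch, MutableInt) are hypothetical stand-ins for the proposed Int/MutableInteger class, not HBase API:

```java
// Sketch of the pattern: pass a mutable offset holder so the decoder can
// report how far it read, avoiding a second scan over the byte[].
public class DecodeSketch {
    /** Simple mutable int holder; each thread passes its own instance, so no volatile is needed. */
    public static final class MutableInt {
        public int value;
        public MutableInt(int value) { this.value = value; }
    }

    /** Decode a 4-byte big-endian int starting at offset.value, advancing the offset. */
    public static int decodeInt(byte[] buff, MutableInt offset) {
        int o = offset.value;
        int v = ((buff[o] & 0xff) << 24)
              | ((buff[o + 1] & 0xff) << 16)
              | ((buff[o + 2] & 0xff) << 8)
              |  (buff[o + 3] & 0xff);
        offset.value = o + 4; // caller learns where the next field starts
        return v;
    }

    public static void main(String[] args) {
        byte[] buff = {0, 0, 0, 7, 0, 0, 0, 42};
        MutableInt pos = new MutableInt(0);
        System.out.println(decodeInt(buff, pos)); // 7
        System.out.println(decodeInt(buff, pos)); // 42
    }
}
```

Because each caller supplies its own holder, ByteRange itself stays untouched and thread-sharing of the range is unaffected.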

> Update ByteRange to maintain consumer's position
> 
>
> Key: HBASE-9091
> URL: https://issues.apache.org/jira/browse/HBASE-9091
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
> Attachments: 0001-HBASE-9091-Extend-ByteRange.patch, 
> 0001-HBASE-9091-Extend-ByteRange.patch
>
>
> ByteRange is a useful alternative to Java's ByteBuffer. Notably, it is 
> mutable and an instance can be assigned over a byte[] after instantiation. 
> This is valuable as a performance consideration when working with byte[] 
> slices in a tight loop. Its current design is such that it is not possible to 
> consume a portion of the range while performing activities like decoding an 
> object without altering the definition of the range. It should provide a 
> position that is independent from the range's offset and length to make 
> partial reads easier.
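The idea in the description -- a position that is independent of the range's offset and length -- can be illustrated with a minimal self-contained sketch. This is not the actual HBase ByteRange API; the class and method names here are hypothetical:

```java
// Sketch: a byte[] view whose read cursor (position) is separate from the
// range definition (offset/length), so a consumer can decode part of the
// range without redefining the range itself.
public class PositionedRangeSketch {
    private byte[] bytes;
    private int offset, length;
    private int position; // consumer's cursor, relative to offset

    public PositionedRangeSketch set(byte[] bytes, int offset, int length) {
        this.bytes = bytes; this.offset = offset; this.length = length;
        this.position = 0; // remapping resets the cursor, not the backing array
        return this;
    }

    public byte get() { return bytes[offset + position++]; }
    public int getPosition() { return position; }
    public int getRemaining() { return length - position; }

    public static void main(String[] args) {
        byte[] backing = {10, 20, 30, 40, 50};
        PositionedRangeSketch r = new PositionedRangeSketch().set(backing, 1, 3);
        System.out.println(r.get());          // 20
        System.out.println(r.get());          // 30
        System.out.println(r.getRemaining()); // 1 -- the range itself is unchanged
    }
}
```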



[jira] [Commented] (HBASE-6580) Deprecate HTablePool in favor of HConnection.getTable(...)

2013-08-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13729101#comment-13729101
 ] 

Hadoop QA commented on HBASE-6580:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12595845/6580-trunk-v3.txt
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 9 new 
or modified tests.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 4 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.client.TestFromClientSideWithCoprocessor
  org.apache.hadoop.hbase.client.TestFromClientSide

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6603//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6603//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6603//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6603//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6603//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6603//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6603//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6603//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6603//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6603//console


> Deprecate HTablePool in favor of HConnection.getTable(...)
> --
>
> Key: HBASE-6580
> URL: https://issues.apache.org/jira/browse/HBASE-6580
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 0.94.6, 0.95.0
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
> Fix For: 0.98.0, 0.95.2, 0.94.11
>
> Attachments: 6580-trunk.txt, 6580-trunk-v2.txt, 6580-trunk-v3.txt, 
> HBASE-6580_v1.patch, HBASE-6580_v2.patch
>
>
> Update:
> I now propose deprecating HTablePool and instead introduce a getTable method 
> on HConnection and allow HConnection to manage the ThreadPool.
> Initial proposal:
> Here I propose a very simple TablePool.
> It could be called LightHTablePool (or something - if you have a better name).
> Internally it would maintain an HConnection and an Executor service and each 
> invocation of getTable(...) would create a new HTable and close() would just 
> close it.
> In testing I find this more light weight than HTablePool and easier to 
> monitor in terms of resources used.
> It would hardly be more than a few dozen lines of code.



[jira] [Commented] (HBASE-7525) A canary monitoring program specifically for regionserver

2013-08-04 Thread takeshi.miao (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13729100#comment-13729100
 ] 

takeshi.miao commented on HBASE-7525:
-

Dear [~stack] & [~mbertozzi]

I am wondering what you think about this ticket?

> A canary monitoring program specifically for regionserver
> -
>
> Key: HBASE-7525
> URL: https://issues.apache.org/jira/browse/HBASE-7525
> Project: HBase
>  Issue Type: New Feature
>  Components: monitoring
>Affects Versions: 0.94.0
>Reporter: takeshi.miao
>Priority: Minor
> Fix For: 0.95.0
>
> Attachments: HBASE-7525-0.95-v0.patch, HBASE-7525-0.95-v1.patch, 
> HBASE-7525-0.95-v3.patch, HBASE-7525-v0.patch, RegionServerCanary.java
>
>
> *Motivation*
> This ticket is to provide a canary monitoring tool specifically for 
> HRegionserver, details as follows
> 1. This tool was requested by the operations team, who felt that a canary 
> check for every region of an HBase cluster is too many checks for their 
> purposes, so I implemented this coarse-grained one for them based on the 
> original o.a.h.h.tool.Canary
> 2. The tool is multi-threaded: each Get request is sent by its own thread. I 
> chose this approach because we have suffered region server hangs whose root 
> cause is still unclear, so this tool can help the operations team detect a 
> hung region server, if any.
> *example*
> 1. the tool docs
> ./bin/hbase org.apache.hadoop.hbase.tool.RegionServerCanary -help
> Usage: [opts] [regionServerName 1 [regionServerName 2...]]
>  regionServerName - FQDN serverName, can use linux command:hostname -f to 
> check your serverName
>  where [-opts] are:
>-help Show this help and exit.
>-eUse regionServerName as regular expression
>   which means the regionServerName is regular expression pattern
>-f  stop whole program if first error occurs, default is true
>-t  timeout for a check, default is 60 (millisecs)
>-daemonContinuous check at defined intervals.
>-interval   Interval between checks (sec)
> 2. Will send a request to each regionserver in a HBase cluster
> ./bin/hbase org.apache.hadoop.hbase.tool.RegionServerCanary
> 3. Will send a request to a regionserver by given name
> ./bin/hbase org.apache.hadoop.hbase.tool.RegionServerCanary rs1.domainname
> 4. Will send a request to regionserver(s) by given regular-expression
> /opt/trend/circus-opstool/bin/hbase-canary-monitor-each-regionserver.sh -e 
> rs1.domainname.pattern
> // another example
> ./bin/hbase org.apache.hadoop.hbase.tool.RegionServerCanary -e 
> tw-poc-tm-puppet-hdn[0-9]\{1,2\}.client.tw.trendnet.org
> 5. Will send a request to a regionserver and also set a timeout limit for 
> this test
> // query regionserver:rs1.domainname with timeout limit 10sec
> // -f false, means that will not exit this program even test failed
> ./bin/hbase org.apache.hadoop.hbase.tool.RegionServerCanary -f false -t 1 
> rs1.domainname
> // echo "1" if timeout
> echo "$?"
> 6. Will run as daemon mode, which means it will send request to each 
> regionserver periodically
> ./bin/hbase org.apache.hadoop.hbase.tool.RegionServerCanary -daemon



[jira] [Commented] (HBASE-8408) Implement namespace

2013-08-04 Thread Francis Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13729099#comment-13729099
 ] 

Francis Liu commented on HBASE-8408:


In that case, yes. The patch is up, a lot smaller, and waiting for review. 
We've been running it internally. It needs a rebase and the migration code. 
I'll take a look tomorrow. 

> Implement namespace
> ---
>
> Key: HBASE-8408
> URL: https://issues.apache.org/jira/browse/HBASE-8408
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Francis Liu
>Assignee: Francis Liu
> Attachments: HBASE-8015_11.patch, HBASE-8015_12.patch, 
> HBASE-8015_13.patch, HBASE-8015_14.patch, HBASE-8015_15.patch, 
> HBASE-8015_16.patch, HBASE-8015_1.patch, HBASE-8015_2.patch, 
> HBASE-8015_3.patch, HBASE-8015_4.patch, HBASE-8015_5.patch, 
> HBASE-8015_6.patch, HBASE-8015_7.patch, HBASE-8015_8.patch, 
> HBASE-8015_9.patch, HBASE-8015.patch, TestNamespaceMigration.tgz, 
> TestNamespaceUpgrade.tgz
>
>




[jira] [Commented] (HBASE-9091) Update ByteRange to maintain consumer's position

2013-08-04 Thread Matt Corgan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13729092#comment-13729092
 ] 

Matt Corgan commented on HBASE-9091:


Sorry - I'm thinking of 2 separate scenarios:

1) Long-lived ByteRanges, such as wrapping a block cache block that may be read 
simultaneously by multiple threads.  Here I'd argue against the position field 
since separate reader threads will each want their own position.

2) High speed, single threaded reuse, such as in prefix-tree encoding where the 
ByteRange will be remapped frequently.  This is where the volatiles will hurt.

Maybe ByteRange should just be an interface with these different concerns 
addressed in different implementations.  The sub-classing could introduce a 
small performance cost, but it's probably not too bad to begin with, and a lot 
of uses will get inlined by the compiler anyway.
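The interface-with-separate-implementations idea can be sketched as follows. The names (PositionedRange, SimpleRange, SharedRange) are hypothetical illustrations of the design choice, not proposed HBase API:

```java
// Sketch: keep the position concern behind an interface so the
// single-threaded fast path (scenario 2) pays no volatile cost, while a
// shared, long-lived range (scenario 1) can use a different implementation.
public class RangeImplSketch {
    interface PositionedRange {
        int position();
        void setPosition(int p);
    }

    /** High-speed single-threaded reuse: plain field, no memory-barrier cost. */
    static final class SimpleRange implements PositionedRange {
        private int position;
        public int position() { return position; }
        public void setPosition(int p) { position = p; }
    }

    /** Long-lived range visible to multiple threads: volatile so readers see updates. */
    static final class SharedRange implements PositionedRange {
        private volatile int position;
        public int position() { return position; }
        public void setPosition(int p) { position = p; }
    }

    public static void main(String[] args) {
        PositionedRange r = new SimpleRange();
        r.setPosition(5);
        System.out.println(r.position()); // 5
    }
}
```

The virtual dispatch through the interface is the "small performance cost" mentioned above; in hot loops the JIT typically devirtualizes and inlines such monomorphic calls.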

> Update ByteRange to maintain consumer's position
> 
>
> Key: HBASE-9091
> URL: https://issues.apache.org/jira/browse/HBASE-9091
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
> Attachments: 0001-HBASE-9091-Extend-ByteRange.patch, 
> 0001-HBASE-9091-Extend-ByteRange.patch
>
>
> ByteRange is a useful alternative to Java's ByteBuffer. Notably, it is 
> mutable and an instance can be assigned over a byte[] after instantiation. 
> This is valuable as a performance consideration when working with byte[] 
> slices in a tight loop. Its current design is such that it is not possible to 
> consume a portion of the range while performing activities like decoding an 
> object without altering the definition of the range. It should provide a 
> position that is independent from the range's offset and length to make 
> partial reads easier.



[jira] [Commented] (HBASE-8201) OrderedBytes: an ordered encoding strategy

2013-08-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13729083#comment-13729083
 ] 

Hadoop QA commented on HBASE-8201:
--

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12595842/0001-HBASE-8201-OrderedBytes-order-preserving-encoding.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 8 new 
or modified tests.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6602//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6602//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6602//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6602//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6602//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6602//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6602//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6602//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6602//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6602//console


> OrderedBytes: an ordered encoding strategy
> --
>
> Key: HBASE-8201
> URL: https://issues.apache.org/jira/browse/HBASE-8201
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
> Fix For: 0.95.2
>
> Attachments: 
> 0001-HBASE-8201-OrderedBytes-order-preserving-encoding.patch, 
> 0001-HBASE-8201-OrderedBytes-order-preserving-encoding.patch, 
> 0001-HBASE-8201-OrderedBytes-order-preserving-encoding.patch, 
> 0001-HBASE-8201-OrderedBytes-order-preserving-encoding.patch, 
> 0001-HBASE-8201-OrderedBytes-provides-order-preserving-se.patch, 
> 0001-HBASE-8201-OrderedBytes-provides-order-preserving-se.patch, 
> 0001-HBASE-8201-OrderedBytes-provides-order-preserving-se.patch, 
> 0001-HBASE-8201-OrderedBytes-provides-order-preserving-se.patch, 
> 0001-HBASE-8201-OrderedBytes-provides-order-preserving-se.patch, 
> 0001-HBASE-8201-OrderedBytes-provides-order-preserving-se.patch, 
> 0001-HBASE-8201-OrderedBytes-provides-order-preserving-se.patch, 
> 0001-HBASE-8201-OrderedBytes-provides-order-preserving-se.patch, 
> 0001-HBASE-8201-OrderedBytes-provides-order-preserving-se.patch, 
> 0001-HBASE-8201-OrderedBytes-provides-order-preserving-se.patch, 
> 0001-HBASE-8201-OrderedBytes-provides-order-preserving-se.patch
>
>
> Once the spec is agreed upon, it must be implemented.



[jira] [Commented] (HBASE-9115) HTableInterface.append operation may overwrites values

2013-08-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13729080#comment-13729080
 ] 

Hudson commented on HBASE-9115:
---

SUCCESS: Integrated in HBase-TRUNK #4341 (See 
[https://builds.apache.org/job/HBase-TRUNK/4341/])
HBASE-9115 Addendum for server side fix (Ted Yu and Lars) (tedyu: rev 1510355)
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Append.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java


> HTableInterface.append operation may overwrites values
> --
>
> Key: HBASE-9115
> URL: https://issues.apache.org/jira/browse/HBASE-9115
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.10
> Environment: MAC OS X 10.8.4, Hbase in the pseudo-distributed mode, 
> hadoop v1.2.0, Hbase Java API based client.
> *hdfs-site.xml*:
> {code:xml} 
> 
>  
>  dfs.replication
>  1
>  
> 
> dfs.support.append
> true
> 
> 
> {code}
> *hbase-site.xml*:
> {code:xml} 
> 
>   
> hbase.rootdir
> hdfs://localhost:9000/hbase
>   
> 
> hbase.cluster.distributed
> true
> 
> 
> hbase.zookeeper.quorum
> localhost
> 
> 
> dfs.support.append
> true
> 
> 
> {code} 
>Reporter: Aleksandr B
>Assignee: Ted Yu
>Priority: Critical
> Fix For: 0.98.0, 0.95.2, 0.94.11
>
> Attachments: 9115-0.94.add, 9115-0.94.txt, 9115-0.94-v2.txt, 
> 9115-trunk.addendum, 9115-trunk.addendum2, 9115-trunk.addendum3, 
> 9115-trunk.txt
>
>
> I use Hbase Java API and I try to append values Bytes.toBytes("one two") and 
> Bytes.toBytes(" three") in 3 columns.
> Only for 2 out of these 3 columns the result is "one two three".
> *Output from the hbase shell:*
> {noformat} 
> hbase(main):008:0* scan "mytesttable"
> ROWCOLUMN+CELL
> 
>  mytestRowKey  column=TestA:dlbytes, 
> timestamp=1375436156140, value=one two three  
>
>  mytestRowKey  column=TestA:tbytes, 
> timestamp=1375436156140, value=one two three  
> 
>  mytestRowKey  column=TestA:ulbytes, 
> timestamp=1375436156140, value= three 
>
> 1 row(s) in 0.0280 seconds
> {noformat}
> *My test code:*
> {code:title=Database.java|borderStyle=solid}
> import static org.junit.Assert.*;
> import java.io.IOException;
>  
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.hbase.HBaseConfiguration;
> import org.apache.hadoop.hbase.HColumnDescriptor;
> import org.apache.hadoop.hbase.HTableDescriptor;
> import org.apache.hadoop.hbase.client.HBaseAdmin;
> import org.apache.hadoop.hbase.client.HTableInterface;
> import org.apache.hadoop.hbase.client.HTablePool;
> import org.apache.hadoop.hbase.client.Append;
> import org.apache.hadoop.hbase.client.Result;
> import org.apache.hadoop.hbase.util.Bytes;
> import org.junit.Test;
> ...
> @Test
> public void testAppend() throws IOException {
> byte [] rowKey = Bytes.toBytes("mytestRowKey");
> byte [] column1 = Bytes.toBytes("ulbytes");
> byte [] column2 = Bytes.toBytes("dlbytes");
> byte [] column3 = Bytes.toBytes("tbytes");
> String part11 = "one two";
> String part12 = " three";
> String cFamily = "TestA";
> String TABLE = "mytesttable";
> Configuration conf = HBaseConfiguration.create();
> HTablePool pool = new HTablePool(conf, 10);
> HBaseAdmin admin = new HBaseAdmin(conf);
> 
> if(admin.tableExists(TABLE)){
> admin.disableTable(TABLE);
> admin.deleteTable(TABLE);
> }
> 
> HTableDescriptor tableDescriptor = new HTableDescriptor(TABLE);
> HColumnDescriptor hcd = new HColumnDescriptor(cFamily);
> hcd.setMaxVersions(1);
> tableDescriptor.addFamily(hcd);
> admin.createTable(tableDescriptor);
> HTableInterface table = pool.getTable(TABLE);
> 
> Append a = new Append(rowKey);
> a.setReturnResults(false);
> a.add(Bytes.toBytes(cFamily), column1, Bytes.toBytes(part11));
> a.add(Bytes.toBytes(cFamily), column2, Bytes.toBytes(part11));
> a.add(Bytes.toBytes(cFamily), column3, Bytes.toBytes(part11));
> table.append(a);
> a = new Append(rowKey);
> a.add(Bytes.toBytes(cFamily), column1, Bytes.toBytes(part12));
> a.add(Bytes.toBytes(cFamily), column2

[jira] [Updated] (HBASE-6580) Deprecate HTablePool in favor of HConnection.getTable(...)

2013-08-04 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-6580:
-

Attachment: 6580-trunk-v3.txt

Fixing TestClientNoCluster

> Deprecate HTablePool in favor of HConnection.getTable(...)
> --
>
> Key: HBASE-6580
> URL: https://issues.apache.org/jira/browse/HBASE-6580
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 0.94.6, 0.95.0
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
> Fix For: 0.98.0, 0.95.2, 0.94.11
>
> Attachments: 6580-trunk.txt, 6580-trunk-v2.txt, 6580-trunk-v3.txt, 
> HBASE-6580_v1.patch, HBASE-6580_v2.patch
>
>
> Update:
> I now propose deprecating HTablePool and instead introduce a getTable method 
> on HConnection and allow HConnection to manage the ThreadPool.
> Initial proposal:
> Here I propose a very simple TablePool.
> It could be called LightHTablePool (or something - if you have a better name).
> Internally it would maintain an HConnection and an Executor service and each 
> invocation of getTable(...) would create a new HTable and close() would just 
> close it.
> In testing I find this more light weight than HTablePool and easier to 
> monitor in terms of resources used.
> It would hardly be more than a few dozen lines of code.



[jira] [Resolved] (HBASE-9115) HTableInterface.append operation may overwrites values

2013-08-04 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu resolved HBASE-9115.
---

  Resolution: Fixed
Hadoop Flags: Reviewed

> HTableInterface.append operation may overwrites values
> --
>
> Key: HBASE-9115
> URL: https://issues.apache.org/jira/browse/HBASE-9115
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.10
> Environment: MAC OS X 10.8.4, Hbase in the pseudo-distributed mode, 
> hadoop v1.2.0, Hbase Java API based client.
> *hdfs-site.xml*:
> {code:xml} 
> 
>  
>  dfs.replication
>  1
>  
> 
> dfs.support.append
> true
> 
> 
> {code}
> *hbase-site.xml*:
> {code:xml} 
> 
>   
> hbase.rootdir
> hdfs://localhost:9000/hbase
>   
> 
> hbase.cluster.distributed
> true
> 
> 
> hbase.zookeeper.quorum
> localhost
> 
> 
> dfs.support.append
> true
> 
> 
> {code} 
>Reporter: Aleksandr B
>Assignee: Ted Yu
>Priority: Critical
> Fix For: 0.98.0, 0.95.2, 0.94.11
>
> Attachments: 9115-0.94.add, 9115-0.94.txt, 9115-0.94-v2.txt, 
> 9115-trunk.addendum, 9115-trunk.addendum2, 9115-trunk.addendum3, 
> 9115-trunk.txt
>
>
> I use Hbase Java API and I try to append values Bytes.toBytes("one two") and 
> Bytes.toBytes(" three") in 3 columns.
> Only for 2 out of these 3 columns the result is "one two three".
> *Output from the hbase shell:*
> {noformat} 
> hbase(main):008:0* scan "mytesttable"
> ROWCOLUMN+CELL
> 
>  mytestRowKey  column=TestA:dlbytes, 
> timestamp=1375436156140, value=one two three  
>
>  mytestRowKey  column=TestA:tbytes, 
> timestamp=1375436156140, value=one two three  
> 
>  mytestRowKey  column=TestA:ulbytes, 
> timestamp=1375436156140, value= three 
>
> 1 row(s) in 0.0280 seconds
> {noformat}
> *My test code:*
> {code:title=Database.java|borderStyle=solid}
> import static org.junit.Assert.*;
> import java.io.IOException;
>
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.hbase.HBaseConfiguration;
> import org.apache.hadoop.hbase.HColumnDescriptor;
> import org.apache.hadoop.hbase.HTableDescriptor;
> import org.apache.hadoop.hbase.client.HBaseAdmin;
> import org.apache.hadoop.hbase.client.HTableInterface;
> import org.apache.hadoop.hbase.client.HTablePool;
> import org.apache.hadoop.hbase.client.Append;
> import org.apache.hadoop.hbase.client.Result;
> import org.apache.hadoop.hbase.util.Bytes;
> import org.junit.Test;
> ...
> @Test
> public void testAppend() throws IOException {
>     byte[] rowKey = Bytes.toBytes("mytestRowKey");
>     byte[] column1 = Bytes.toBytes("ulbytes");
>     byte[] column2 = Bytes.toBytes("dlbytes");
>     byte[] column3 = Bytes.toBytes("tbytes");
>     String part11 = "one two";
>     String part12 = " three";
>     String cFamily = "TestA";
>     String TABLE = "mytesttable";
>     Configuration conf = HBaseConfiguration.create();
>     HTablePool pool = new HTablePool(conf, 10);
>     HBaseAdmin admin = new HBaseAdmin(conf);
>
>     if (admin.tableExists(TABLE)) {
>         admin.disableTable(TABLE);
>         admin.deleteTable(TABLE);
>     }
>
>     HTableDescriptor tableDescriptor = new HTableDescriptor(TABLE);
>     HColumnDescriptor hcd = new HColumnDescriptor(cFamily);
>     hcd.setMaxVersions(1);
>     tableDescriptor.addFamily(hcd);
>     admin.createTable(tableDescriptor);
>     HTableInterface table = pool.getTable(TABLE);
>
>     // First append writes "one two" to all three columns.
>     Append a = new Append(rowKey);
>     a.setReturnResults(false);
>     a.add(Bytes.toBytes(cFamily), column1, Bytes.toBytes(part11));
>     a.add(Bytes.toBytes(cFamily), column2, Bytes.toBytes(part11));
>     a.add(Bytes.toBytes(cFamily), column3, Bytes.toBytes(part11));
>     table.append(a);
>     // Second append should concatenate " three" onto each column.
>     a = new Append(rowKey);
>     a.add(Bytes.toBytes(cFamily), column1, Bytes.toBytes(part12));
>     a.add(Bytes.toBytes(cFamily), column2, Bytes.toBytes(part12));
>     a.add(Bytes.toBytes(cFamily), column3, Bytes.toBytes(part12));
>     Result result = table.append(a);
>     byte[] resultForColumn1 = result.getValue(Bytes.toBytes(cFamily), column1);
>     byte[] resultForColumn2 = result.getValue(Bytes.toBytes(cFamily), column2);
>     byte[] resultForColumn3 = result.getValue(Bytes.toBytes(cFamily), column3);
> }
> {code}
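The semantics the test relies on is plain per-column byte concatenation: appending " three" to a cell holding "one two" should yield "one two three" in every column. A minimal standalone sketch of that expectation (plain Java, not HBase server code):

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

class AppendSemantics {
    // Concatenate two byte arrays, as a per-column append is expected to do.
    static byte[] append(byte[] current, byte[] delta) {
        byte[] out = Arrays.copyOf(current, current.length + delta.length);
        System.arraycopy(delta, 0, out, current.length, delta.length);
        return out;
    }

    public static void main(String[] args) {
        byte[] first = "one two".getBytes(StandardCharsets.UTF_8);
        byte[] second = " three".getBytes(StandardCharsets.UTF_8);
        // Every column that received both appends should read "one two three";
        // the reported bug is that one column ends up with only " three".
        System.out.println(new String(append(first, second), StandardCharsets.UTF_8));
    }
}
```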

[jira] [Commented] (HBASE-9115) HTableInterface.append operation may overwrites values

2013-08-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13729074#comment-13729074
 ] 

Hudson commented on HBASE-9115:
---

SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #653 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/653/])
HBASE-9115 Addendum for server side fix (Ted Yu and Lars) (tedyu: rev 1510355)
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Append.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java


> HTableInterface.append operation may overwrites values
> --
>
> Key: HBASE-9115
> URL: https://issues.apache.org/jira/browse/HBASE-9115

[jira] [Commented] (HBASE-9091) Update ByteRange to maintain consumer's position

2013-08-04 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13729070#comment-13729070
 ] 

Nick Dimiduk commented on HBASE-9091:
-

[~stack] AtomicLong will work, though I think it would be ugly in the API -- 
there's nothing atomic about the use-case. I'd prefer a position-tracking 
subclass of {{ByteRange}} over implementing a mutable Integer class just for 
this.

> Update ByteRange to maintain consumer's position
> 
>
> Key: HBASE-9091
> URL: https://issues.apache.org/jira/browse/HBASE-9091
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
> Attachments: 0001-HBASE-9091-Extend-ByteRange.patch, 
> 0001-HBASE-9091-Extend-ByteRange.patch
>
>
> ByteRange is a useful alternative to Java's ByteBuffer. Notably, it is 
> mutable and an instance can be assigned over a byte[] after instantiation. 
> This is valuable as a performance consideration when working with byte[] 
> slices in a tight loop. Its current design is such that it is not possible to 
> consume a portion of the range while performing activities like decoding an 
> object without altering the definition of the range. It should provide a 
> position that is independent from the range's offset and length to make 
> partial reads easier.
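As a rough illustration of the position-tracking idea under discussion (class and method names here are hypothetical, not the actual HBase ByteRange API):

```java
// Minimal sketch of a byte range with an independent read position.
// The position lets a consumer decode partial content without
// altering the range's offset and length.
class PositionedByteRange {
    private byte[] bytes;
    private int offset;
    private int length;
    private int position; // relative to offset; independent of offset/length

    // Re-point this instance over a byte[] slice; position resets to 0
    // so the same instance can be reused in a tight loop.
    PositionedByteRange set(byte[] bytes, int offset, int length) {
        this.bytes = bytes;
        this.offset = offset;
        this.length = length;
        this.position = 0;
        return this;
    }

    // Consume one byte at the current position; the range itself is unchanged.
    byte get() {
        return bytes[offset + position++];
    }

    int getPosition() { return position; }
    int getRemaining() { return length - position; }
}
```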

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8693) DataType: provide extensible type API

2013-08-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13729071#comment-13729071
 ] 

Hadoop QA commented on HBASE-8693:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12595843/0001-HBASE-8693-Extensible-data-types-API.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 32 new 
or modified tests.

{color:red}-1 hadoop1.0{color}.  The patch failed to compile against the 
hadoop 1.0 profile.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6601//console

This message is automatically generated.

> DataType: provide extensible type API
> -
>
> Key: HBASE-8693
> URL: https://issues.apache.org/jira/browse/HBASE-8693
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
> Fix For: 0.95.2
>
> Attachments: 0001-HBASE-8693-Extensible-data-types-API.patch, 
> 0001-HBASE-8693-Extensible-data-types-API.patch, 
> 0001-HBASE-8693-Extensible-data-types-API.patch, 
> 0001-HBASE-8693-Extensible-data-types-API.patch, 
> 0001-HBASE-8693-Extensible-data-types-API.patch, 
> 0001-HBASE-8693-Extensible-data-types-API.patch, 
> 0001-HBASE-8693-Extensible-data-types-API.patch, 
> 0001-HBASE-8693-Extensible-data-types-API.patch, 
> 0001-HBASE-8693-Extensible-data-types-API.patch, 
> 0001-HBASE-8693-Extensible-data-types-API.patch, 
> 0002-HBASE-8693-example-Use-DataType-API-to-build-regionN.patch, 
> KijiFormattedEntityId.java
>
>




[jira] [Commented] (HBASE-9091) Update ByteRange to maintain consumer's position

2013-08-04 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13729069#comment-13729069
 ] 

Nick Dimiduk commented on HBASE-9091:
-

bq. since it will slow it down for all the single-threaded users (prefix-tree).

I thought the primary premise of complaint against adding the position feature 
was that the existing consumers of {{ByteRange}} assume a concurrent context.

> Update ByteRange to maintain consumer's position
> 
>
> Key: HBASE-9091
> URL: https://issues.apache.org/jira/browse/HBASE-9091
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
> Attachments: 0001-HBASE-9091-Extend-ByteRange.patch, 
> 0001-HBASE-9091-Extend-ByteRange.patch
>
>
> ByteRange is a useful alternative to Java's ByteBuffer. Notably, it is 
> mutable and an instance can be assigned over a byte[] after instantiation. 
> This is valuable as a performance consideration when working with byte[] 
> slices in a tight loop. Its current design is such that it is not possible to 
> consume a portion of the range while performing activities like decoding an 
> object without altering the definition of the range. It should provide a 
> position that is independent from the range's offset and length to make 
> partial reads easier.



[jira] [Updated] (HBASE-8693) DataType: provide extensible type API

2013-08-04 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-8693:


Attachment: 0001-HBASE-8693-Extensible-data-types-API.patch

Address reviewer comments from Stack and James.

> DataType: provide extensible type API
> -
>
> Key: HBASE-8693
> URL: https://issues.apache.org/jira/browse/HBASE-8693
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
> Fix For: 0.95.2
>
> Attachments: 0001-HBASE-8693-Extensible-data-types-API.patch, 
> 0001-HBASE-8693-Extensible-data-types-API.patch, 
> 0001-HBASE-8693-Extensible-data-types-API.patch, 
> 0001-HBASE-8693-Extensible-data-types-API.patch, 
> 0001-HBASE-8693-Extensible-data-types-API.patch, 
> 0001-HBASE-8693-Extensible-data-types-API.patch, 
> 0001-HBASE-8693-Extensible-data-types-API.patch, 
> 0001-HBASE-8693-Extensible-data-types-API.patch, 
> 0001-HBASE-8693-Extensible-data-types-API.patch, 
> 0001-HBASE-8693-Extensible-data-types-API.patch, 
> 0002-HBASE-8693-example-Use-DataType-API-to-build-regionN.patch, 
> KijiFormattedEntityId.java
>
>




[jira] [Updated] (HBASE-8201) OrderedBytes: an ordered encoding strategy

2013-08-04 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-8201:


Attachment: 0001-HBASE-8201-OrderedBytes-order-preserving-encoding.patch

Fix javadoc warnings.

> OrderedBytes: an ordered encoding strategy
> --
>
> Key: HBASE-8201
> URL: https://issues.apache.org/jira/browse/HBASE-8201
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
> Fix For: 0.95.2
>
> Attachments: 
> 0001-HBASE-8201-OrderedBytes-order-preserving-encoding.patch, 
> 0001-HBASE-8201-OrderedBytes-order-preserving-encoding.patch, 
> 0001-HBASE-8201-OrderedBytes-order-preserving-encoding.patch, 
> 0001-HBASE-8201-OrderedBytes-order-preserving-encoding.patch, 
> 0001-HBASE-8201-OrderedBytes-provides-order-preserving-se.patch, 
> 0001-HBASE-8201-OrderedBytes-provides-order-preserving-se.patch, 
> 0001-HBASE-8201-OrderedBytes-provides-order-preserving-se.patch, 
> 0001-HBASE-8201-OrderedBytes-provides-order-preserving-se.patch, 
> 0001-HBASE-8201-OrderedBytes-provides-order-preserving-se.patch, 
> 0001-HBASE-8201-OrderedBytes-provides-order-preserving-se.patch, 
> 0001-HBASE-8201-OrderedBytes-provides-order-preserving-se.patch, 
> 0001-HBASE-8201-OrderedBytes-provides-order-preserving-se.patch, 
> 0001-HBASE-8201-OrderedBytes-provides-order-preserving-se.patch, 
> 0001-HBASE-8201-OrderedBytes-provides-order-preserving-se.patch, 
> 0001-HBASE-8201-OrderedBytes-provides-order-preserving-se.patch
>
>
> Once the spec is agreed upon, it must be implemented.
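For context, the core trick of an order-preserving numeric encoding is to flip the sign bit before writing big-endian bytes, so that unsigned lexicographic byte comparison (how HBase compares keys) matches signed numeric order. A minimal sketch of that idea (not the actual OrderedBytes wire format):

```java
class OrderPreserving {
    // Encode an int so that unsigned lexicographic byte order equals
    // numeric order: flip the sign bit, then write big-endian.
    static byte[] encodeInt(int v) {
        int flipped = v ^ 0x80000000;
        return new byte[] {
            (byte) (flipped >>> 24), (byte) (flipped >>> 16),
            (byte) (flipped >>> 8), (byte) flipped
        };
    }

    // Unsigned lexicographic comparison of byte arrays.
    static int compare(byte[] a, byte[] b) {
        for (int i = 0; i < Math.min(a.length, b.length); i++) {
            int d = (a[i] & 0xff) - (b[i] & 0xff);
            if (d != 0) return d;
        }
        return a.length - b.length;
    }

    public static void main(String[] args) {
        // -5 < 3 numerically, and the encodings sort the same way.
        System.out.println(compare(encodeInt(-5), encodeInt(3)) < 0);
    }
}
```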



[jira] [Commented] (HBASE-9115) HTableInterface.append operation may overwrites values

2013-08-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13729066#comment-13729066
 ] 

Hudson commented on HBASE-9115:
---

SUCCESS: Integrated in hbase-0.95 #403 (See 
[https://builds.apache.org/job/hbase-0.95/403/])
HBASE-9115 Addendum for server side fix (Ted Yu and Lars) (tedyu: rev 1510356)
* 
/hbase/branches/0.95/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Append.java
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java


> HTableInterface.append operation may overwrites values
> --
>
> Key: HBASE-9115
> URL: https://issues.apache.org/jira/browse/HBASE-9115

[jira] [Commented] (HBASE-9091) Update ByteRange to maintain consumer's position

2013-08-04 Thread Matt Corgan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13729057#comment-13729057
 ] 

Matt Corgan commented on HBASE-9091:


{quote}volatile does make sense from a concurrent access perspective.{quote}
I think that would also merit a separate class since it will slow it down for 
all the single-threaded users (prefix-tree).  Stepping back, I don't know that 
it needs to be volatile anyway until we have a multi-threaded use case?

> Update ByteRange to maintain consumer's position
> 
>
> Key: HBASE-9091
> URL: https://issues.apache.org/jira/browse/HBASE-9091
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
> Attachments: 0001-HBASE-9091-Extend-ByteRange.patch, 
> 0001-HBASE-9091-Extend-ByteRange.patch
>
>
> ByteRange is a useful alternative to Java's ByteBuffer. Notably, it is 
> mutable and an instance can be assigned over a byte[] after instantiation. 
> This is valuable as a performance consideration when working with byte[] 
> slices in a tight loop. Its current design is such that it is not possible to 
> consume a portion of the range while performing activities like decoding an 
> object without altering the definition of the range. It should provide a 
> position that is independent from the range's offset and length to make 
> partial reads easier.



[jira] [Commented] (HBASE-8408) Implement namespace

2013-08-04 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13729056#comment-13729056
 ] 

stack commented on HBASE-8408:
--

[~toffer] Doubt we can launch 0.96 if it can't be secure.  Would that make 
hbase-8409 a blocker on 0.96?

> Implement namespace
> ---
>
> Key: HBASE-8408
> URL: https://issues.apache.org/jira/browse/HBASE-8408
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Francis Liu
>Assignee: Francis Liu
> Attachments: HBASE-8015_11.patch, HBASE-8015_12.patch, 
> HBASE-8015_13.patch, HBASE-8015_14.patch, HBASE-8015_15.patch, 
> HBASE-8015_16.patch, HBASE-8015_1.patch, HBASE-8015_2.patch, 
> HBASE-8015_3.patch, HBASE-8015_4.patch, HBASE-8015_5.patch, 
> HBASE-8015_6.patch, HBASE-8015_7.patch, HBASE-8015_8.patch, 
> HBASE-8015_9.patch, HBASE-8015.patch, TestNamespaceMigration.tgz, 
> TestNamespaceUpgrade.tgz
>
>




[jira] [Commented] (HBASE-8960) TestDistributedLogSplitting.testLogReplayForDisablingTable fails sometimes

2013-08-04 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13729055#comment-13729055
 ] 

stack commented on HBASE-8960:
--

[~jeffreyz] So could your tool be improved so that, if a test was present 
in the previous three builds but not in the fourth (caveat: only when the number 
of missing tests is below some threshold -- say ten or so), it prints an error?  Thanks

> TestDistributedLogSplitting.testLogReplayForDisablingTable fails sometimes
> --
>
> Key: HBASE-8960
> URL: https://issues.apache.org/jira/browse/HBASE-8960
> Project: HBase
>  Issue Type: Task
>  Components: test
>Reporter: Jimmy Xiang
>Assignee: Jeffrey Zhong
>Priority: Minor
> Fix For: 0.95.2
>
> Attachments: hbase-8960-addendum-2.patch, hbase-8960-addendum.patch, 
> hbase-8960.patch
>
>
> http://54.241.6.143/job/HBase-0.95-Hadoop-2/org.apache.hbase$hbase-server/634/testReport/junit/org.apache.hadoop.hbase.master/TestDistributedLogSplitting/testLogReplayForDisablingTable/
> {noformat}
> java.lang.AssertionError: expected:<1000> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at org.apache.hadoop.hbase.master.TestDistributedLogSplitting.testLogReplayForDisablingTable(TestDistributedLogSplitting.java:797)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {noformat}



[jira] [Commented] (HBASE-9054) HBaseAdmin#isTableDisabled() should check table existence before checking zk state.

2013-08-04 Thread Bene Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13729053#comment-13729053
 ] 

Bene Guo commented on HBASE-9054:
-

[~lhofhansl] I will submit a patch today.

> HBaseAdmin#isTableDisabled() should check table existence before checking zk 
> state. 
> 
>
> Key: HBASE-9054
> URL: https://issues.apache.org/jira/browse/HBASE-9054
> Project: HBase
>  Issue Type: Bug
>  Components: Admin
>Affects Versions: 0.94.10
>Reporter: Bene Guo
> Fix For: 0.94.12
>
>
> To avoid compatibility issues with older versions, HBaseAdmin#isTableDisabled() 
> and HBaseAdmin#isTableEnabled() (HBASE-8538 fixed isTableEnabled) return true 
> even if the table state in zk is null. They also return true even when the 
> table is not present at all. We should confirm table existence from .META. 
> before checking the zk state; if the table is not present or has been deleted, 
> a TableNotFoundException should be thrown.
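The proposed check order can be sketched independently of the HBase API; the predicate parameters below are hypothetical stand-ins for the .META. lookup and the zk state read:

```java
import java.util.function.Predicate;

class TableStateCheck {
    // Stand-in for the exception thrown when a table is absent from .META.
    static class TableNotFoundException extends RuntimeException {
        TableNotFoundException(String table) { super(table); }
    }

    // Check existence in .META. first, then consult the zk state, so a
    // missing table fails loudly instead of silently reading as "disabled".
    static boolean isTableDisabled(String table,
                                   Predicate<String> existsInMeta,
                                   Predicate<String> disabledInZk) {
        if (!existsInMeta.test(table)) {
            throw new TableNotFoundException(table);
        }
        return disabledInZk.test(table);
    }
}
```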



[jira] [Commented] (HBASE-9091) Update ByteRange to maintain consumer's position

2013-08-04 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13729052#comment-13729052
 ] 

stack commented on HBASE-9091:
--

AtomicLong or make a mutable Integer?

> Update ByteRange to maintain consumer's position
> 
>
> Key: HBASE-9091
> URL: https://issues.apache.org/jira/browse/HBASE-9091
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
> Attachments: 0001-HBASE-9091-Extend-ByteRange.patch, 
> 0001-HBASE-9091-Extend-ByteRange.patch
>
>
> ByteRange is a useful alternative to Java's ByteBuffer. Notably, it is 
> mutable and an instance can be assigned over a byte[] after instantiation. 
> This is valuable as a performance consideration when working with byte[] 
> slices in a tight loop. Its current design is such that it is not possible to 
> consume a portion of the range while performing activities like decoding an 
> object without altering the definition of the range. It should provide a 
> position that is independent from the range's offset and length to make 
> partial reads easier.



[jira] [Commented] (HBASE-9115) HTableInterface.append operation may overwrites values

2013-08-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13729046#comment-13729046
 ] 

Hudson commented on HBASE-9115:
---

FAILURE: Integrated in HBase-0.94 #1095 (See 
[https://builds.apache.org/job/HBase-0.94/1095/])
HBASE-9115 Addendum (Ted and Lars) (tedyu: rev 1510358)
* /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/client/Append.java
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java


> HTableInterface.append operation may overwrites values
> --
>
> Key: HBASE-9115
> URL: https://issues.apache.org/jira/browse/HBASE-9115

[jira] [Commented] (HBASE-9115) HTableInterface.append operation may overwrites values

2013-08-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13729044#comment-13729044
 ] 

Hudson commented on HBASE-9115:
---

SUCCESS: Integrated in HBase-0.94-security #246 (See 
[https://builds.apache.org/job/HBase-0.94-security/246/])
HBASE-9115 Addendum (Ted and Lars) (tedyu: rev 1510358)
* /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/client/Append.java
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java


> HTableInterface.append operation may overwrites values
> --
>
> Key: HBASE-9115
> URL: https://issues.apache.org/jira/browse/HBASE-9115
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.10
> Environment: MAC OS X 10.8.4, Hbase in the pseudo-distributed mode, 
> hadoop v1.2.0, Hbase Java API based client.
> *hdfs-site.xml*:
> {code:xml} 
> <configuration>
>   <property>
>     <name>dfs.replication</name>
>     <value>1</value>
>   </property>
>   <property>
>     <name>dfs.support.append</name>
>     <value>true</value>
>   </property>
> </configuration>
> {code}
> *hbase-site.xml*:
> {code:xml} 
> <configuration>
>   <property>
>     <name>hbase.rootdir</name>
>     <value>hdfs://localhost:9000/hbase</value>
>   </property>
>   <property>
>     <name>hbase.cluster.distributed</name>
>     <value>true</value>
>   </property>
>   <property>
>     <name>hbase.zookeeper.quorum</name>
>     <value>localhost</value>
>   </property>
>   <property>
>     <name>dfs.support.append</name>
>     <value>true</value>
>   </property>
> </configuration>
> {code} 
>Reporter: Aleksandr B
>Assignee: Ted Yu
>Priority: Critical
> Fix For: 0.98.0, 0.95.2, 0.94.11
>
> Attachments: 9115-0.94.add, 9115-0.94.txt, 9115-0.94-v2.txt, 
> 9115-trunk.addendum, 9115-trunk.addendum2, 9115-trunk.addendum3, 
> 9115-trunk.txt
>
>
> I use the HBase Java API and I try to append the values Bytes.toBytes("one two") and 
> Bytes.toBytes(" three") to 3 columns.
> Only for 2 out of these 3 columns is the result "one two three".
> *Output from the hbase shell:*
> {noformat} 
> hbase(main):008:0* scan "mytesttable"
> ROW                           COLUMN+CELL
> 
>  mytestRowKey  column=TestA:dlbytes, 
> timestamp=1375436156140, value=one two three  
>
>  mytestRowKey  column=TestA:tbytes, 
> timestamp=1375436156140, value=one two three  
> 
>  mytestRowKey  column=TestA:ulbytes, 
> timestamp=1375436156140, value= three 
>
> 1 row(s) in 0.0280 seconds
> {noformat}
> *My test code:*
> {code:title=Database.java|borderStyle=solid}
> import static org.junit.Assert.*;
> import java.io.IOException;
>  
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.hbase.HBaseConfiguration;
> import org.apache.hadoop.hbase.HColumnDescriptor;
> import org.apache.hadoop.hbase.HTableDescriptor;
> import org.apache.hadoop.hbase.client.HBaseAdmin;
> import org.apache.hadoop.hbase.client.HTableInterface;
> import org.apache.hadoop.hbase.client.HTablePool;
> import org.apache.hadoop.hbase.client.Append;
> import org.apache.hadoop.hbase.client.Result;
> import org.apache.hadoop.hbase.util.Bytes;
> import org.junit.Test;
> ...
> @Test
> public void testAppend() throws IOException {
> byte [] rowKey = Bytes.toBytes("mytestRowKey");
> byte [] column1 = Bytes.toBytes("ulbytes");
> byte [] column2 = Bytes.toBytes("dlbytes");
> byte [] column3 = Bytes.toBytes("tbytes");
> String part11 = "one two";
> String part12 = " three";
> String cFamily = "TestA";
> String TABLE = "mytesttable";
> Configuration conf = HBaseConfiguration.create();
> HTablePool pool = new HTablePool(conf, 10);
> HBaseAdmin admin = new HBaseAdmin(conf);
> 
> if(admin.tableExists(TABLE)){
> admin.disableTable(TABLE);
> admin.deleteTable(TABLE);
> }
> 
> HTableDescriptor tableDescriptor = new HTableDescriptor(TABLE);
> HColumnDescriptor hcd = new HColumnDescriptor(cFamily);
> hcd.setMaxVersions(1);
> tableDescriptor.addFamily(hcd);
> admin.createTable(tableDescriptor);
> HTableInterface table = pool.getTable(TABLE);
> 
> Append a = new Append(rowKey);
> a.setReturnResults(false);
> a.add(Bytes.toBytes(cFamily), column1, Bytes.toBytes(part11));
> a.add(Bytes.toBytes(cFamily), column2, Bytes.toBytes(part11));
> a.add(Bytes.toBytes(cFamily), column3, Bytes.toBytes(part11));
> table.append(a);
> a = new Append(rowKey);
> a.add(Bytes.toBytes(cFamily), column1, Bytes.toBytes(part12));
> a.add(Bytes.toBytes(cFamily), column2, Bytes.toBytes(part12));
> a.add(Bytes.toBytes(cFamily), column3, Bytes.toBytes(part12));
> Result result = table.append(a);
> byte [] resultForColumn1 = result.getValue(Bytes.toBytes(cFamily), column1);
> byte [] resultForColumn2 = result.getValue(Bytes.toBytes(cFamily), column2);
> byte [] resultForColumn3 = result.getValue(Bytes.toBytes(cFamily), column3);
> ...
> {code}

[jira] [Commented] (HBASE-6580) Deprecate HTablePool in favor of HConnection.getTable(...)

2013-08-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13729037#comment-13729037
 ] 

Hadoop QA commented on HBASE-6580:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12595834/6580-trunk-v2.txt
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified tests.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 4 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100 characters.

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.client.TestClientNoCluster

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6600//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6600//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6600//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6600//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6600//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6600//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6600//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6600//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6600//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6600//console

This message is automatically generated.

> Deprecate HTablePool in favor of HConnection.getTable(...)
> --
>
> Key: HBASE-6580
> URL: https://issues.apache.org/jira/browse/HBASE-6580
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 0.94.6, 0.95.0
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
> Fix For: 0.98.0, 0.95.2, 0.94.11
>
> Attachments: 6580-trunk.txt, 6580-trunk-v2.txt, HBASE-6580_v1.patch, 
> HBASE-6580_v2.patch
>
>
> Update:
> I now propose deprecating HTablePool and instead introduce a getTable method 
> on HConnection and allow HConnection to manage the ThreadPool.
> Initial proposal:
> Here I propose a very simple TablePool.
> It could be called LightHTablePool (or something - if you have a better name).
> Internally it would maintain an HConnection and an Executor service and each 
> invocation of getTable(...) would create a new HTable and close() would just 
> close it.
> In testing I find this more light weight than HTablePool and easier to 
> monitor in terms of resources used.
> It would hardly be more than a few dozen lines of code.
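The "initial proposal" above is concrete enough to sketch. Below is a minimal, hypothetical illustration in plain Java of the pattern being proposed: one shared connection/executor owned by the pool, with getTable(...) handing out cheap per-call handles whose close() is trivial. The names (LightTablePool, TableHandle) and the stand-in ExecutorService are assumptions for the sketch, not the HBase API.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical sketch of the proposal: shared resources live in the pool;
// table handles are cheap to create and trivial to close.
class LightTablePool implements AutoCloseable {
    private final ExecutorService executor = Executors.newFixedThreadPool(4);

    // Each invocation creates a fresh handle over the shared executor,
    // mirroring "each invocation of getTable(...) would create a new HTable".
    TableHandle getTable(String name) {
        return new TableHandle(name, executor);
    }

    // Shared resources are released exactly once, when the pool is closed.
    @Override
    public void close() {
        executor.shutdown();
    }

    static final class TableHandle implements AutoCloseable {
        private final String name;
        private final ExecutorService executor; // shared, not owned
        private boolean closed;

        TableHandle(String name, ExecutorService executor) {
            this.name = name;
            this.executor = executor;
        }

        String getName() {
            return name;
        }

        // close() "would just close it": there is no pooled instance to return.
        @Override
        public void close() {
            closed = true;
        }

        boolean isClosed() {
            return closed;
        }
    }
}
```

Compared with HTablePool there is no checkout/return bookkeeping to monitor; the only long-lived resources are the shared connection and the executor.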

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8201) OrderedBytes: an ordered encoding strategy

2013-08-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13729033#comment-13729033
 ] 

Hadoop QA commented on HBASE-8201:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12595833/0001-HBASE-8201-OrderedBytes-order-preserving-encoding.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 8 new 
or modified tests.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 2 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100 characters.

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6599//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6599//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6599//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6599//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6599//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6599//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6599//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6599//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6599//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6599//console

This message is automatically generated.

> OrderedBytes: an ordered encoding strategy
> --
>
> Key: HBASE-8201
> URL: https://issues.apache.org/jira/browse/HBASE-8201
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
> Fix For: 0.95.2
>
> Attachments: 
> 0001-HBASE-8201-OrderedBytes-order-preserving-encoding.patch, 
> 0001-HBASE-8201-OrderedBytes-order-preserving-encoding.patch, 
> 0001-HBASE-8201-OrderedBytes-order-preserving-encoding.patch, 
> 0001-HBASE-8201-OrderedBytes-provides-order-preserving-se.patch, 
> 0001-HBASE-8201-OrderedBytes-provides-order-preserving-se.patch, 
> 0001-HBASE-8201-OrderedBytes-provides-order-preserving-se.patch, 
> 0001-HBASE-8201-OrderedBytes-provides-order-preserving-se.patch, 
> 0001-HBASE-8201-OrderedBytes-provides-order-preserving-se.patch, 
> 0001-HBASE-8201-OrderedBytes-provides-order-preserving-se.patch, 
> 0001-HBASE-8201-OrderedBytes-provides-order-preserving-se.patch, 
> 0001-HBASE-8201-OrderedBytes-provides-order-preserving-se.patch, 
> 0001-HBASE-8201-OrderedBytes-provides-order-preserving-se.patch, 
> 0001-HBASE-8201-OrderedBytes-provides-order-preserving-se.patch, 
> 0001-HBASE-8201-OrderedBytes-provides-order-preserving-se.patch
>
>
> Once the spec is agreed upon, it must be implemented.
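The property such an encoding must provide can be shown with a self-contained toy: encode values so that unsigned lexicographic comparison of the raw bytes gives the same order as comparing the values themselves. The sketch below does this for int only, by storing it big-endian with the sign bit flipped; it illustrates the idea and is not the OrderedBytes implementation under review.

```java
// Toy order-preserving encoding (illustration only, not HBase's OrderedBytes):
// big-endian int with the sign bit flipped, so negative values sort first
// under unsigned lexicographic byte comparison.
class OrderedIntDemo {
    static byte[] encodeAscending(int v) {
        int u = v ^ 0x80000000; // flip sign bit: signed order -> unsigned order
        return new byte[] {
            (byte) (u >>> 24), (byte) (u >>> 16), (byte) (u >>> 8), (byte) u
        };
    }

    // Unsigned lexicographic comparison, the order a byte store applies to keys.
    static int compareBytes(byte[] a, byte[] b) {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int d = (a[i] & 0xFF) - (b[i] & 0xFF);
            if (d != 0) {
                return d;
            }
        }
        return a.length - b.length;
    }
}
```

A DESCENDING variant would simply complement every encoded byte.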



[jira] [Commented] (HBASE-9115) HTableInterface.append operation may overwrites values

2013-08-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13729032#comment-13729032
 ] 

Hudson commented on HBASE-9115:
---

FAILURE: Integrated in hbase-0.95-on-hadoop2 #218 (See 
[https://builds.apache.org/job/hbase-0.95-on-hadoop2/218/])
HBASE-9115 Addendum for server side fix (Ted Yu and Lars) (tedyu: rev 1510356)
* 
/hbase/branches/0.95/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Append.java
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java



[jira] [Updated] (HBASE-9115) HTableInterface.append operation may overwrites values

2013-08-04 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-9115:
--

Attachment: 9115-0.94.add


[jira] [Updated] (HBASE-9115) HTableInterface.append operation may overwrites values

2013-08-04 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-9115:
--

Status: Open  (was: Patch Available)


[jira] [Commented] (HBASE-8752) Backport HBASE-6466 to 0.94

2013-08-04 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13729023#comment-13729023
 ] 

Lars Hofhansl commented on HBASE-8752:
--

I want to get this into 0.94, but I need some hard numbers from somebody with a 
workload where this has a benefit in order to justify the change in 0.94.


> Backport HBASE-6466 to 0.94
> ---
>
> Key: HBASE-8752
> URL: https://issues.apache.org/jira/browse/HBASE-8752
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Affects Versions: 0.94.8
>Reporter: Richard Ding
>Assignee: Richard Ding
>Priority: Minor
> Fix For: 0.94.12
>
> Attachments: HBASE-8752.patch
>
>
> 0.94 already supports multi-threaded compaction. It would be good if it also 
> supported multi-threaded memstore flush, so that users can tune the number of 
> threads for both compaction and flushing when running a heavy-write load.
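For reference, a backport of HBASE-6466 would typically be tuned through a flusher-thread count in hbase-site.xml. The property name and value below are assumptions to be verified against the actual backport patch; they are shown only to illustrate the kind of knob such a change exposes:

{code:xml}
<!-- Assumed property name; verify against the HBASE-6466 backport. -->
<property>
  <name>hbase.hstore.flusher.count</name>
  <value>2</value>
</property>
{code}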



[jira] [Updated] (HBASE-6580) Deprecate HTablePool in favor of HConnection.getTable(...)

2013-08-04 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-6580:
-

Attachment: 6580-trunk-v2.txt

Updated patch. The previous version did not make the required constructor 
accessible.

Also modified the Javadoc a bit and added a basic section to book.xml.
(I expect the Javadoc/book.xml to be updated further in trunk once all the 
connection-caching nonsense is removed.)




[jira] [Updated] (HBASE-8201) OrderedBytes: an ordered encoding strategy

2013-08-04 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-8201:


Attachment: 0001-HBASE-8201-OrderedBytes-order-preserving-encoding.patch

Addressing more reviewer comments.

- fixes the long -> int cast
- removes Numeric helper class, leaving decoded object instantiation up to 
clients
- removes conditional logic from Order, adds tests
- beefs up documentation on OrderedBytes, and corrects handling of 0x00 byte in 
BlobCopy ASCENDING
- adds helper functions for inspecting the nature of encoded values





[jira] [Commented] (HBASE-9123) Filter protobuf generated code from long line warning

2013-08-04 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13729016#comment-13729016
 ] 

Jesse Yates commented on HBASE-9123:


+1 LGTM

> Filter protobuf generated code from long line warning
> -
>
> Key: HBASE-9123
> URL: https://issues.apache.org/jira/browse/HBASE-9123
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Fix For: 0.98.0
>
> Attachments: 9123.patch, 9123-v2.patch
>
>
> For a big patch, such as the one for namespaces, there would be many changes in 
> the protobuf-generated code.
> See example here: 
> https://builds.apache.org/job/PreCommit-HBASE-Build/6569/console
> We should filter protobuf-generated code out of the long-line warning.
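One way such a filter is typically implemented is to drop generated sources from the changed-file list before the line-length check runs. The sketch below uses made-up paths, and the /protobuf/generated/ path convention is an assumption, not the actual HBase precommit script:

```shell
# Illustrative only: exclude protobuf-generated sources from a lint file list
# before measuring line lengths. Paths are hypothetical examples.
printf '%s\n' \
  'hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/ClientProtos.java' \
  'hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java' \
  | grep -v '/protobuf/generated/'
```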



[jira] [Commented] (HBASE-8408) Implement namespace

2013-08-04 Thread Francis Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13729007#comment-13729007
 ] 

Francis Liu commented on HBASE-8408:


Thanks.

BTW, I was thinking about security: we probably shouldn't allow secure deployments 
to run with a 0.96 release without the namespace security work (HBASE-8409). 
Should I add code to prevent this from happening until HBASE-8409 goes in?


> Implement namespace
> ---
>
> Key: HBASE-8408
> URL: https://issues.apache.org/jira/browse/HBASE-8408
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Francis Liu
>Assignee: Francis Liu
> Attachments: HBASE-8015_11.patch, HBASE-8015_12.patch, 
> HBASE-8015_13.patch, HBASE-8015_14.patch, HBASE-8015_15.patch, 
> HBASE-8015_16.patch, HBASE-8015_1.patch, HBASE-8015_2.patch, 
> HBASE-8015_3.patch, HBASE-8015_4.patch, HBASE-8015_5.patch, 
> HBASE-8015_6.patch, HBASE-8015_7.patch, HBASE-8015_8.patch, 
> HBASE-8015_9.patch, HBASE-8015.patch, TestNamespaceMigration.tgz, 
> TestNamespaceUpgrade.tgz
>
>




[jira] [Commented] (HBASE-9115) HTableInterface.append operation may overwrites values

2013-08-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13729004#comment-13729004
 ] 

Hadoop QA commented on HBASE-9115:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12595827/9115-trunk.addendum3
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6598//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6598//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6598//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6598//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6598//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6598//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6598//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6598//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6598//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6598//console

This message is automatically generated.

> HTableInterface.append operation may overwrites values
> --
>
> Key: HBASE-9115
> URL: https://issues.apache.org/jira/browse/HBASE-9115
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.10
> Environment: MAC OS X 10.8.4, Hbase in the pseudo-distributed mode, 
> hadoop v1.2.0, Hbase Java API based client.
> *hdfs-site.xml*:
> {code:xml} 
> 
>  
>  dfs.replication
>  1
>  
> 
> dfs.support.append
> true
> 
> 
> {code}
> *hbase-site.xml*:
> {code:xml} 
> 
>   
> hbase.rootdir
> hdfs://localhost:9000/hbase
>   
> 
> hbase.cluster.distributed
> true
> 
> 
> hbase.zookeeper.quorum
> localhost
> 
> 
> dfs.support.append
> true
> 
> 
> {code} 
>Reporter: Aleksandr B
>Assignee: Ted Yu
>Priority: Critical
> Fix For: 0.98.0, 0.95.2, 0.94.11
>
> Attachments: 9115-0.94.txt, 9115-0.94-v2.txt, 9115-trunk.addendum, 
> 9115-trunk.addendum2, 9115-trunk.addendum3, 9115-trunk.txt
>
>
> I use the HBase Java API and try to append the values Bytes.toBytes("one two") 
> and Bytes.toBytes(" three") to 3 columns.
> Only for 2 out of these 3 columns is the result "one two three".
> *Output from the hbase shell:*
> {noformat} 
> hbase(main):008:0* scan "mytesttable"
> ROWCOLUMN+CELL
> 
>  mytestRowKey  column=TestA:dlbytes, 
> timestamp=1375436156140, value=one two three  
>
>  mytestRowKey  column=TestA:tbytes, 
> timestamp=1375436156140, value=one two three

[jira] [Commented] (HBASE-9115) HTableInterface.append operation may overwrites values

2013-08-04 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728999#comment-13728999
 ] 

Lars Hofhansl commented on HBASE-9115:
--

Thanks Ted and Stack.
Incidentally, the same is not quite correct for Increment (sorting happens by 
the Increment object, not by the Store on the server), but let's fix that in 
another issue - if we even want to touch that.

+1 on addendum3.
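The expected append semantics can be modeled in plain Java with no HBase dependency: each append concatenates onto the column's current value, so after both appends every column should read "one two three" (the reported bug left one column with only " three"). A minimal sketch of that model:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Plain-Java model of the append semantics the test in this issue expects:
// appending "one two" and then " three" to each of three columns should
// leave every column at "one two three".
public class AppendModel {
    static Map<String, String> row = new LinkedHashMap<>();

    static void append(String column, String value) {
        // merge() concatenates onto the existing value, or stores it if absent
        row.merge(column, value, String::concat);
    }

    public static void main(String[] args) {
        String[] columns = {"ulbytes", "dlbytes", "tbytes"};
        for (String c : columns) append(c, "one two");
        for (String c : columns) append(c, " three");
        for (String c : columns) {
            if (!"one two three".equals(row.get(c))) {
                throw new AssertionError("unexpected value for " + c);
            }
        }
        System.out.println(row);
    }
}
```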


> HTableInterface.append operation may overwrites values
> --
>
> Key: HBASE-9115
> URL: https://issues.apache.org/jira/browse/HBASE-9115
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.10
> Environment: MAC OS X 10.8.4, Hbase in the pseudo-distributed mode, 
> hadoop v1.2.0, Hbase Java API based client.
> *hdfs-site.xml*:
> {code:xml} 
> 
>  
>  dfs.replication
>  1
>  
> 
> dfs.support.append
> true
> 
> 
> {code}
> *hbase-site.xml*:
> {code:xml} 
> 
>   
> hbase.rootdir
> hdfs://localhost:9000/hbase
>   
> 
> hbase.cluster.distributed
> true
> 
> 
> hbase.zookeeper.quorum
> localhost
> 
> 
> dfs.support.append
> true
> 
> 
> {code} 
>Reporter: Aleksandr B
>Assignee: Ted Yu
>Priority: Critical
> Fix For: 0.98.0, 0.95.2, 0.94.11
>
> Attachments: 9115-0.94.txt, 9115-0.94-v2.txt, 9115-trunk.addendum, 
> 9115-trunk.addendum2, 9115-trunk.addendum3, 9115-trunk.txt
>
>
> I use the HBase Java API and try to append the values Bytes.toBytes("one two") 
> and Bytes.toBytes(" three") to 3 columns.
> Only for 2 out of these 3 columns is the result "one two three".
> *Output from the hbase shell:*
> {noformat} 
> hbase(main):008:0* scan "mytesttable"
> ROWCOLUMN+CELL
> 
>  mytestRowKey  column=TestA:dlbytes, 
> timestamp=1375436156140, value=one two three  
>
>  mytestRowKey  column=TestA:tbytes, 
> timestamp=1375436156140, value=one two three  
> 
>  mytestRowKey  column=TestA:ulbytes, 
> timestamp=1375436156140, value= three 
>
> 1 row(s) in 0.0280 seconds
> {noformat}
> *My test code:*
> {code:title=Database.java|borderStyle=solid}
> import static org.junit.Assert.*;
> import java.io.IOException;
>  
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.hbase.HBaseConfiguration;
> import org.apache.hadoop.hbase.HColumnDescriptor;
> import org.apache.hadoop.hbase.HTableDescriptor;
> import org.apache.hadoop.hbase.client.HBaseAdmin;
> import org.apache.hadoop.hbase.client.HTableInterface;
> import org.apache.hadoop.hbase.client.HTablePool;
> import org.apache.hadoop.hbase.client.Append;
> import org.apache.hadoop.hbase.client.Result;
> import org.apache.hadoop.hbase.util.Bytes;
> import org.junit.Test;
> ...
> @Test
> public void testAppend() throws IOException {
> byte [] rowKey = Bytes.toBytes("mytestRowKey");
> byte [] column1 = Bytes.toBytes("ulbytes");
> byte [] column2 = Bytes.toBytes("dlbytes");
> byte [] column3 = Bytes.toBytes("tbytes");
> String part11 = "one two";
> String part12 = " three";
> String cFamily = "TestA";
> String TABLE = "mytesttable";
> Configuration conf = HBaseConfiguration.create();
> HTablePool pool = new HTablePool(conf, 10);
> HBaseAdmin admin = new HBaseAdmin(conf);
> 
> if(admin.tableExists(TABLE)){
> admin.disableTable(TABLE);
> admin.deleteTable(TABLE);
> }
> 
> HTableDescriptor tableDescriptor = new HTableDescriptor(TABLE);
> HColumnDescriptor hcd = new HColumnDescriptor(cFamily);
> hcd.setMaxVersions(1);
> tableDescriptor.addFamily(hcd);
> admin.createTable(tableDescriptor);
> HTableInterface table = pool.getTable(TABLE);
> 
> Append a = new Append(rowKey);
> a.setReturnResults(false);
> a.add(Bytes.toBytes(cFamily), column1, Bytes.toBytes(part11));
> a.add(Bytes.toBytes(cFamily), column2, Bytes.toBytes(part11));
> a.add(Bytes.toBytes(cFamily), column3, Bytes.toBytes(part11));
> table.append(a);
> a = new Append(rowKey);
> a.add(Bytes.toBytes(cFamily), column1, Bytes.toBytes(part12));
> a.add(Bytes.toBytes(cFamily), column2, Bytes.toBytes(part12));
> a.add(Bytes.toBytes(cFamily), column3, Bytes.toBytes(part12));
> Result result = table.append(a);

[jira] [Commented] (HBASE-9099) logReplay could trigger double region assignment

2013-08-04 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728979#comment-13728979
 ] 

Ted Yu commented on HBASE-9099:
---

+1

> logReplay could trigger double region assignment
> 
>
> Key: HBASE-9099
> URL: https://issues.apache.org/jira/browse/HBASE-9099
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0, 0.95.2
>Reporter: Jeffrey Zhong
>Assignee: Jeffrey Zhong
> Attachments: hbase-9099.patch, hbase-9099-v1.patch
>
>
> The symptom: the first region assignment submitted in SSH is still in progress 
> when am.waitOnRegionToClearRegionsInTransition times out, so we re-submit 
> another SSH, which invokes another region assignment for the same region. This 
> causes the region to get stuck in RIT status.



[jira] [Updated] (HBASE-9115) HTableInterface.append operation may overwrites values

2013-08-04 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-9115:
--

Attachment: 9115-trunk.addendum3

> HTableInterface.append operation may overwrites values
> --
>
> Key: HBASE-9115
> URL: https://issues.apache.org/jira/browse/HBASE-9115
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.10
> Environment: MAC OS X 10.8.4, Hbase in the pseudo-distributed mode, 
> hadoop v1.2.0, Hbase Java API based client.
> *hdfs-site.xml*:
> {code:xml} 
> 
>  
>  dfs.replication
>  1
>  
> 
> dfs.support.append
> true
> 
> 
> {code}
> *hbase-site.xml*:
> {code:xml} 
> 
>   
> hbase.rootdir
> hdfs://localhost:9000/hbase
>   
> 
> hbase.cluster.distributed
> true
> 
> 
> hbase.zookeeper.quorum
> localhost
> 
> 
> dfs.support.append
> true
> 
> 
> {code} 
>Reporter: Aleksandr B
>Assignee: Ted Yu
>Priority: Critical
> Fix For: 0.98.0, 0.95.2, 0.94.11
>
> Attachments: 9115-0.94.txt, 9115-0.94-v2.txt, 9115-trunk.addendum, 
> 9115-trunk.addendum2, 9115-trunk.addendum3, 9115-trunk.txt
>
>
> I use the HBase Java API and try to append the values Bytes.toBytes("one two") 
> and Bytes.toBytes(" three") to 3 columns.
> Only for 2 out of these 3 columns is the result "one two three".
> *Output from the hbase shell:*
> {noformat} 
> hbase(main):008:0* scan "mytesttable"
> ROWCOLUMN+CELL
> 
>  mytestRowKey  column=TestA:dlbytes, 
> timestamp=1375436156140, value=one two three  
>
>  mytestRowKey  column=TestA:tbytes, 
> timestamp=1375436156140, value=one two three  
> 
>  mytestRowKey  column=TestA:ulbytes, 
> timestamp=1375436156140, value= three 
>
> 1 row(s) in 0.0280 seconds
> {noformat}
> *My test code:*
> {code:title=Database.java|borderStyle=solid}
> import static org.junit.Assert.*;
> import java.io.IOException;
>  
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.hbase.HBaseConfiguration;
> import org.apache.hadoop.hbase.HColumnDescriptor;
> import org.apache.hadoop.hbase.HTableDescriptor;
> import org.apache.hadoop.hbase.client.HBaseAdmin;
> import org.apache.hadoop.hbase.client.HTableInterface;
> import org.apache.hadoop.hbase.client.HTablePool;
> import org.apache.hadoop.hbase.client.Append;
> import org.apache.hadoop.hbase.client.Result;
> import org.apache.hadoop.hbase.util.Bytes;
> import org.junit.Test;
> ...
> @Test
> public void testAppend() throws IOException {
> byte [] rowKey = Bytes.toBytes("mytestRowKey");
> byte [] column1 = Bytes.toBytes("ulbytes");
> byte [] column2 = Bytes.toBytes("dlbytes");
> byte [] column3 = Bytes.toBytes("tbytes");
> String part11 = "one two";
> String part12 = " three";
> String cFamily = "TestA";
> String TABLE = "mytesttable";
> Configuration conf = HBaseConfiguration.create();
> HTablePool pool = new HTablePool(conf, 10);
> HBaseAdmin admin = new HBaseAdmin(conf);
> 
> if(admin.tableExists(TABLE)){
> admin.disableTable(TABLE);
> admin.deleteTable(TABLE);
> }
> 
> HTableDescriptor tableDescriptor = new HTableDescriptor(TABLE);
> HColumnDescriptor hcd = new HColumnDescriptor(cFamily);
> hcd.setMaxVersions(1);
> tableDescriptor.addFamily(hcd);
> admin.createTable(tableDescriptor);
> HTableInterface table = pool.getTable(TABLE);
> 
> Append a = new Append(rowKey);
> a.setReturnResults(false);
> a.add(Bytes.toBytes(cFamily), column1, Bytes.toBytes(part11));
> a.add(Bytes.toBytes(cFamily), column2, Bytes.toBytes(part11));
> a.add(Bytes.toBytes(cFamily), column3, Bytes.toBytes(part11));
> table.append(a);
> a = new Append(rowKey);
> a.add(Bytes.toBytes(cFamily), column1, Bytes.toBytes(part12));
> a.add(Bytes.toBytes(cFamily), column2, Bytes.toBytes(part12));
> a.add(Bytes.toBytes(cFamily), column3, Bytes.toBytes(part12));
> Result result = table.append(a);
> byte [] resultForColumn1 = result.getValue(Bytes.toBytes(cFamily), 
> column1);
> byte [] resultForColumn2 = result.getValue(Bytes.toBytes(cFamily), 
> column2);
> byte [] resultForColumn3 = result.getValue(Bytes.toBytes(cFamily), 
> column3);

[jira] [Commented] (HBASE-8760) possible loss of data in snapshot taken after region split

2013-08-04 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728970#comment-13728970
 ] 

Jerry He commented on HBASE-8760:
-

And the population of .META. is based on the .regioninfo files that were 
carried over from the original table?
That makes sense.
Thanks.

> possible loss of data in snapshot taken after region split
> --
>
> Key: HBASE-8760
> URL: https://issues.apache.org/jira/browse/HBASE-8760
> Project: HBase
>  Issue Type: Bug
>  Components: snapshots
>Affects Versions: 0.94.8, 0.95.1
>Reporter: Jerry He
> Fix For: 0.98.0, 0.95.2, 0.94.12
>
> Attachments: HBase-8760-0.94.8.patch, HBase-8760-0.94.8-v1.patch, 
> HBASE-8760-thz-v0.patch, HBASE-8760-thz-v1.patch
>
>
> Right after a region split but before the daughter regions are compacted, we 
> have two daughter regions containing Reference files to the parent hfiles.
> If we take snapshot right at the moment, the snapshot will succeed, but it 
> will only contain the daughter Reference files. Since there is no hold on the 
> parent hfiles, they will be deleted by the HFile Cleaner after they are no 
> longer needed by the daughter regions soon after.
> At a minimum, we need to keep these parent hfiles from being deleted. 



[jira] [Commented] (HBASE-8760) possible loss of data in snapshot taken after region split

2013-08-04 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728953#comment-13728953
 ] 

Matteo Bertozzi commented on HBASE-8760:


{quote}
This is a meaningful change!
Parent region and daughter regions are all included in the snapshot. After 
restore/clone, all will be included and brought online?
{quote}
the restore/clone will populate .META. in the same way as in the original 
table, so no, they will not be online (with this patch, you end up with the 
same exact disk/meta layout as the original table, with nothing offline or 
missing).

> possible loss of data in snapshot taken after region split
> --
>
> Key: HBASE-8760
> URL: https://issues.apache.org/jira/browse/HBASE-8760
> Project: HBase
>  Issue Type: Bug
>  Components: snapshots
>Affects Versions: 0.94.8, 0.95.1
>Reporter: Jerry He
> Fix For: 0.98.0, 0.95.2, 0.94.12
>
> Attachments: HBase-8760-0.94.8.patch, HBase-8760-0.94.8-v1.patch, 
> HBASE-8760-thz-v0.patch, HBASE-8760-thz-v1.patch
>
>
> Right after a region split but before the daughter regions are compacted, we 
> have two daughter regions containing Reference files to the parent hfiles.
> If we take snapshot right at the moment, the snapshot will succeed, but it 
> will only contain the daughter Reference files. Since there is no hold on the 
> parent hfiles, they will be deleted by the HFile Cleaner after they are no 
> longer needed by the daughter regions soon after.
> At a minimum, we need to keep these parent hfiles from being deleted. 



[jira] [Commented] (HBASE-8760) possible loss of data in snapshot taken after region split

2013-08-04 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728951#comment-13728951
 ] 

Jerry He commented on HBASE-8760:
-

This is a meaningful change!
Parent region and daughter regions are all included in the snapshot. After 
restore/clone, all will be included and brought online?

Code comments:
{code}
 snapshtoDisabledRegion(snapshotDir, regionInfo);
{code}
Typo in the method name.
{code}
public void verifyRegions(Path snapshotDir) throws IOException
{code}
==> private void verifyRegions(final Path snapshotDir) throws IOException
{code}
  private void verifyRegion(final FileSystem fs, final Path snapshotDir, final 
HRegionInfo region)
  throws IOException {
 // make sure we have region in the snapshot
{code}
That comment line is not needed anymore.


> possible loss of data in snapshot taken after region split
> --
>
> Key: HBASE-8760
> URL: https://issues.apache.org/jira/browse/HBASE-8760
> Project: HBase
>  Issue Type: Bug
>  Components: snapshots
>Affects Versions: 0.94.8, 0.95.1
>Reporter: Jerry He
> Fix For: 0.98.0, 0.95.2, 0.94.12
>
> Attachments: HBase-8760-0.94.8.patch, HBase-8760-0.94.8-v1.patch, 
> HBASE-8760-thz-v0.patch, HBASE-8760-thz-v1.patch
>
>
> Right after a region split but before the daughter regions are compacted, we 
> have two daughter regions containing Reference files to the parent hfiles.
> If we take snapshot right at the moment, the snapshot will succeed, but it 
> will only contain the daughter Reference files. Since there is no hold on the 
> parent hfiles, they will be deleted by the HFile Cleaner after they are no 
> longer needed by the daughter regions soon after.
> At a minimum, we need to keep these parent hfiles from being deleted. 



[jira] [Commented] (HBASE-8496) Implement tags and the internals of how a tag should look like

2013-08-04 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728918#comment-13728918
 ] 

Ted Yu commented on HBASE-8496:
---

Skimmed through latest design doc.

bq. Pls note that we would not be persisting any tag related information on the 
HFileBlock.

bq. Based on the Encoding/Decoding context state the Encoder and decoding logic 
of the algos would handle tags.

Can you elaborate on the above a bit more?

> Implement tags and the internals of how a tag should look like
> --
>
> Key: HBASE-8496
> URL: https://issues.apache.org/jira/browse/HBASE-8496
> Project: HBase
>  Issue Type: New Feature
>Affects Versions: 0.98.0, 0.95.2
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
>Priority: Critical
> Attachments: Comparison.pdf, HBASE-8496_2.patch, HBASE-8496.patch, 
> Tag design.pdf, Tag design_updated.pdf, Tag_In_KV_Buffer_For_reference.patch
>
>
> The intent of this JIRA comes from HBASE-7897.
> This would help us to decide on the structure and format of how the tags 
> should look like. 



[jira] [Resolved] (HBASE-8881) TestGet failing in testDynamicFilter with AbstractMethodException

2013-08-04 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang resolved HBASE-8881.


Resolution: Fixed

The test was fixed and re-enabled in HBASE-8885.

> TestGet failing in testDynamicFilter with AbstractMethodException
> -
>
> Key: HBASE-8881
> URL: https://issues.apache.org/jira/browse/HBASE-8881
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.9
>Reporter: stack
>Assignee: stack
> Attachments: 8881.txt, ignore.txt
>
>
> See 
> https://builds.apache.org/job/HBase-0.94/1040/testReport/org.apache.hadoop.hbase.client/TestGet/testDynamicFilter/
> It has been happening in the last set of builds.  It does not seem related to 
> the checkin it started happening on.



[jira] [Commented] (HBASE-9098) During recovery use ZK as the source of truth for region state

2013-08-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728878#comment-13728878
 ] 

Hudson commented on HBASE-9098:
---

SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #652 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/652/])
hbase-9098: During recovery use ZK as the source of truth for region state 
(jeffreyz: rev 1510101)
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/HRegionInfo.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/HBaseProtos.java
* /hbase/trunk/hbase-protocol/src/main/protobuf/hbase.proto
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterFileSystem.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/SplitLogManager.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/MetaServerShutdownHandler.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/ServerShutdownHandler.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/HLogSplitter.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestDistributedLogSplitting.java


> During recovery use ZK as the source of truth for region state 
> ---
>
> Key: HBASE-9098
> URL: https://issues.apache.org/jira/browse/HBASE-9098
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 0.95.0
>Reporter: Devaraj Das
>Assignee: Jeffrey Zhong
>Priority: Blocker
> Fix For: 0.95.2
>
> Attachments: hbase-9098.patch, hbase-9098-v1.patch
>
>
> In HLogSplitter:locateRegionAndRefreshLastFlushedSequenceId(HConnection, 
> byte[], byte[], String), we talk to the replayee regionserver to figure out 
> whether a region is in recovery or not. We should look at ZK only for this 
> piece of information (since that is the source of truth for recovery 
> otherwise).



[jira] [Commented] (HBASE-9096) Disable split during log replay

2013-08-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728877#comment-13728877
 ] 

Hudson commented on HBASE-9096:
---

SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #652 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/652/])
hbase-9096: Disable split during log replay (jeffreyz: rev 1510105)
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java


> Disable split during log replay
> ---
>
> Key: HBASE-9096
> URL: https://issues.apache.org/jira/browse/HBASE-9096
> Project: HBase
>  Issue Type: Bug
>Reporter: Devaraj Das
>Assignee: Jeffrey Zhong
> Fix For: 0.98.0, 0.95.2
>
> Attachments: hbase-9096.patch
>
>
> When regions are allowed to take writes during recovery, we could end up in a 
> situation where a split of a region might be triggered. That would close the 
> old region leading to failure of the ongoing replay. In discussions with 
> [~jeffreyz], it seemed to make sense to just disable split during recovery.



[jira] [Commented] (HBASE-9096) Disable split during log replay

2013-08-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728857#comment-13728857
 ] 

Hudson commented on HBASE-9096:
---

SUCCESS: Integrated in hbase-0.95-on-hadoop2 #217 (See 
[https://builds.apache.org/job/hbase-0.95-on-hadoop2/217/])
hbase-9096: Disable split during log replay (jeffreyz: rev 1510107)
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java


> Disable split during log replay
> ---
>
> Key: HBASE-9096
> URL: https://issues.apache.org/jira/browse/HBASE-9096
> Project: HBase
>  Issue Type: Bug
>Reporter: Devaraj Das
>Assignee: Jeffrey Zhong
> Fix For: 0.98.0, 0.95.2
>
> Attachments: hbase-9096.patch
>
>
> When regions are allowed to take writes during recovery, we could end up in a 
> situation where a split of a region might be triggered. That would close the 
> old region leading to failure of the ongoing replay. In discussions with 
> [~jeffreyz], it seemed to make sense to just disable split during recovery.



[jira] [Commented] (HBASE-9098) During recovery use ZK as the source of truth for region state

2013-08-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728858#comment-13728858
 ] 

Hudson commented on HBASE-9098:
---

SUCCESS: Integrated in hbase-0.95-on-hadoop2 #217 (See 
[https://builds.apache.org/job/hbase-0.95-on-hadoop2/217/])
hbase-9098: During recovery use ZK as the source of truth for region state 
(jeffreyz: rev 1510102)
* 
/hbase/branches/0.95/hbase-client/src/main/java/org/apache/hadoop/hbase/HRegionInfo.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/HBaseProtos.java
* /hbase/branches/0.95/hbase-protocol/src/main/protobuf/hbase.proto
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterFileSystem.java
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/master/SplitLogManager.java
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/MetaServerShutdownHandler.java
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/ServerShutdownHandler.java
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/HLogSplitter.java
* 
/hbase/branches/0.95/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestDistributedLogSplitting.java


> During recovery use ZK as the source of truth for region state 
> ---
>
> Key: HBASE-9098
> URL: https://issues.apache.org/jira/browse/HBASE-9098
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 0.95.0
>Reporter: Devaraj Das
>Assignee: Jeffrey Zhong
>Priority: Blocker
> Fix For: 0.95.2
>
> Attachments: hbase-9098.patch, hbase-9098-v1.patch
>
>
> In HLogSplitter:locateRegionAndRefreshLastFlushedSequenceId(HConnection, 
> byte[], byte[], String), we talk to the replayee regionserver to figure out 
> whether a region is in recovery or not. We should look at ZK only for this 
> piece of information (since that is the source of truth for recovery 
> otherwise).



[jira] [Commented] (HBASE-9120) ClassFinder logs errors that are not

2013-08-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728856#comment-13728856
 ] 

Hudson commented on HBASE-9120:
---

SUCCESS: Integrated in hbase-0.95-on-hadoop2 #217 (See 
[https://builds.apache.org/job/hbase-0.95-on-hadoop2/217/])
HBASE-9120 ClassFinder logs errors that are not (stack: rev 1510095)
* 
/hbase/branches/0.95/hbase-common/src/test/java/org/apache/hadoop/hbase/ClassFinder.java


> ClassFinder logs errors that are not
> 
>
> Key: HBASE-9120
> URL: https://issues.apache.org/jira/browse/HBASE-9120
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.94.10
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Minor
> Fix For: 0.98.0, 0.95.2, 0.94.11
>
> Attachments: 9120_trunk.txt, 92120_trunkv2.txt, HBASE-9120.patch
>
>
> ClassFinder logs error messages that are not actionable, so they just cause 
> distraction.



[jira] [Commented] (HBASE-9096) Disable split during log replay

2013-08-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728849#comment-13728849
 ] 

Hudson commented on HBASE-9096:
---

SUCCESS: Integrated in HBase-TRUNK #4340 (See 
[https://builds.apache.org/job/HBase-TRUNK/4340/])
hbase-9096: Disable split during log replay (jeffreyz: rev 1510105)
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java


> Disable split during log replay
> ---
>
> Key: HBASE-9096
> URL: https://issues.apache.org/jira/browse/HBASE-9096
> Project: HBase
>  Issue Type: Bug
>Reporter: Devaraj Das
>Assignee: Jeffrey Zhong
> Fix For: 0.98.0, 0.95.2
>
> Attachments: hbase-9096.patch
>
>
> When regions are allowed to take writes during recovery, we could end up in a 
> situation where a split of a region might be triggered. That would close the 
> old region leading to failure of the ongoing replay. In discussions with 
> [~jeffreyz], it seemed to make sense to just disable split during recovery.
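The guard described above can be sketched as follows. The class and method names (`Region`, `shouldSplit`) are hypothetical stand-ins for HBase's actual `HRegion`/split-policy internals, kept self-contained so the logic is runnable on its own:

```java
// Minimal sketch of the guard this patch adds, with hypothetical names:
// a region that is still replaying its WAL refuses any split request.
class Region {
    private volatile boolean recovering; // true while distributed log replay runs

    void setRecovering(boolean recovering) {
        this.recovering = recovering;
    }

    boolean isRecovering() {
        return recovering;
    }

    // Split decision: never split mid-replay, otherwise fall back to the size test.
    boolean shouldSplit(long storeSizeBytes, long splitThresholdBytes) {
        if (recovering) {
            return false; // a split would close the region and fail the ongoing replay
        }
        return storeSizeBytes > splitThresholdBytes;
    }
}
```

Once recovery finishes and `recovering` is cleared, the normal size-based split check applies again.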



[jira] [Commented] (HBASE-9096) Disable split during log replay

2013-08-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728844#comment-13728844
 ] 

Hudson commented on HBASE-9096:
---

SUCCESS: Integrated in hbase-0.95 #402 (See 
[https://builds.apache.org/job/hbase-0.95/402/])
hbase-9096: Disable split during log replay (jeffreyz: rev 1510107)
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java


> Disable split during log replay
> ---
>
> Key: HBASE-9096
> URL: https://issues.apache.org/jira/browse/HBASE-9096
> Project: HBase
>  Issue Type: Bug
>Reporter: Devaraj Das
>Assignee: Jeffrey Zhong
> Fix For: 0.98.0, 0.95.2
>
> Attachments: hbase-9096.patch
>
>
> When regions are allowed to take writes during recovery, we could end up in a 
> situation where a split of a region might be triggered. That would close the 
> old region leading to failure of the ongoing replay. In discussions with 
> [~jeffreyz], it seemed to make sense to just disable split during recovery.



[jira] [Commented] (HBASE-9098) During recovery use ZK as the source of truth for region state

2013-08-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728845#comment-13728845
 ] 

Hudson commented on HBASE-9098:
---

SUCCESS: Integrated in hbase-0.95 #402 (See 
[https://builds.apache.org/job/hbase-0.95/402/])
hbase-9098: During recovery use ZK as the source of truth for region state 
(jeffreyz: rev 1510102)
* 
/hbase/branches/0.95/hbase-client/src/main/java/org/apache/hadoop/hbase/HRegionInfo.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/HBaseProtos.java
* /hbase/branches/0.95/hbase-protocol/src/main/protobuf/hbase.proto
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterFileSystem.java
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/master/SplitLogManager.java
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/MetaServerShutdownHandler.java
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/ServerShutdownHandler.java
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/HLogSplitter.java
* 
/hbase/branches/0.95/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestDistributedLogSplitting.java


> During recovery use ZK as the source of truth for region state 
> ---
>
> Key: HBASE-9098
> URL: https://issues.apache.org/jira/browse/HBASE-9098
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 0.95.0
>Reporter: Devaraj Das
>Assignee: Jeffrey Zhong
>Priority: Blocker
> Fix For: 0.95.2
>
> Attachments: hbase-9098.patch, hbase-9098-v1.patch
>
>
> In HLogSplitter:locateRegionAndRefreshLastFlushedSequenceId(HConnection, 
> byte[], byte[], String), we talk to the replayee regionserver to figure out 
> whether a region is in recovery or not. We should look at ZK only for this 
> piece of information (since that is the source of truth for recovery 
> otherwise).
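The proposal above, consulting ZK alone for recovery state, can be sketched as follows. A plain in-memory set stands in for the znode tree so the example runs on its own; the path prefix `/hbase/recovering-regions/` is an assumption about the znode layout, and a real implementation would call `ZooKeeper#exists` instead:

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of a ZK-only recovery lookup: the answer comes from the znode tree,
// never from an RPC to the hosting region server.
class RecoveringRegionTracker {
    private static final String BASE = "/hbase/recovering-regions/"; // assumed layout
    private final Set<String> znodes = new HashSet<>(); // stand-in for the ZK tree

    void markRecovering(String encodedRegionName) {
        znodes.add(BASE + encodedRegionName);
    }

    void finishRecovery(String encodedRegionName) {
        znodes.remove(BASE + encodedRegionName);
    }

    // Single source of truth: a region is in recovery iff its znode exists.
    boolean isRecovering(String encodedRegionName) {
        return znodes.contains(BASE + encodedRegionName);
    }
}
```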



[jira] [Commented] (HBASE-9112) Custom TableInputFormat in initTableMapperJob throws ClassNotFoundException on TableMapper

2013-08-04 Thread Debanjan Bhattacharyya (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728831#comment-13728831
 ] 

Debanjan Bhattacharyya commented on HBASE-9112:
---

Well, my mapper extends TableMapper, so ideally that should have been 
considered a job attribute. But the existing code only picks up the output key 
and value classes; it should include job.getMapperClass() too.
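The suggested fix can be sketched as follows. `classesToShip` is a hypothetical stand-in for the class list that `TableMapReduceUtil` feeds to its dependency-jar logic, not HBase's real internals:

```java
import java.util.LinkedHashSet;
import java.util.Set;

// Sketch of the suggested fix: when collecting the classes whose containing
// jars must ship with the job, include the mapper class, not just the output
// key/value classes. Omitting it means the jar holding the mapper's
// superclass (TableMapper) is never localized, so tasks fail with
// ClassNotFoundException.
class DependencyClasses {
    static Set<Class<?>> classesToShip(Class<?> outputKey, Class<?> outputValue,
                                       Class<?> mapperClass) {
        Set<Class<?>> classes = new LinkedHashSet<>();
        classes.add(outputKey);
        classes.add(outputValue);
        classes.add(mapperClass); // the missing piece
        return classes;
    }
}
```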

> Custom TableInputFormat in initTableMapperJob throws ClassNotFoundException on 
> TableMapper
> -
>
> Key: HBASE-9112
> URL: https://issues.apache.org/jira/browse/HBASE-9112
> Project: HBase
>  Issue Type: Bug
>  Components: hadoop2, mapreduce
>Affects Versions: 0.94.6.1
> Environment: CDH-4.3.0-1.cdh4.3.0.p0.22
>Reporter: Debanjan Bhattacharyya
>Assignee: Nick Dimiduk
>
> When using custom TableInputFormat in TableMapReduceUtil.initTableMapperJob 
> in the following way
> TableMapReduceUtil.initTableMapperJob("mytable",
>   MyScan,
>   MyMapper.class,
>   MyKey.class,
>   MyValue.class,
>   myJob,
>   true,
>   MyTableInputFormat.class);
> I get error: java.lang.ClassNotFoundException: 
> org.apache.hadoop.hbase.mapreduce.TableMapper
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
>   at java.lang.ClassLoader.defineClass1(Native Method)
>   at java.lang.ClassLoader.defineClassCond(ClassLoader.java:631)
>   at java.lang.ClassLoader.defineClass(ClassLoader.java:615)
>   at 
> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
>   at java.net.URLClassLoader.defineClass(URLClassLoader.java:283)
>   at java.net.URLClassLoader.access$000(URLClassLoader.java:58)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:197)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
> If I do not use the last two parameters, there is no error.
> What is going wrong here?
> Thanks
> Regards



[jira] [Commented] (HBASE-9098) During recovery use ZK as the source of truth for region state

2013-08-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728804#comment-13728804
 ] 

Hudson commented on HBASE-9098:
---

SUCCESS: Integrated in HBase-TRUNK #4339 (See 
[https://builds.apache.org/job/HBase-TRUNK/4339/])
hbase-9098: During recovery use ZK as the source of truth for region state 
(jeffreyz: rev 1510101)
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/HRegionInfo.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/HBaseProtos.java
* /hbase/trunk/hbase-protocol/src/main/protobuf/hbase.proto
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterFileSystem.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/SplitLogManager.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/MetaServerShutdownHandler.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/ServerShutdownHandler.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/HLogSplitter.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestDistributedLogSplitting.java


> During recovery use ZK as the source of truth for region state 
> ---
>
> Key: HBASE-9098
> URL: https://issues.apache.org/jira/browse/HBASE-9098
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 0.95.0
>Reporter: Devaraj Das
>Assignee: Jeffrey Zhong
>Priority: Blocker
> Fix For: 0.95.2
>
> Attachments: hbase-9098.patch, hbase-9098-v1.patch
>
>
> In HLogSplitter:locateRegionAndRefreshLastFlushedSequenceId(HConnection, 
> byte[], byte[], String), we talk to the replayee regionserver to figure out 
> whether a region is in recovery or not. We should look at ZK only for this 
> piece of information (since that is the source of truth for recovery 
> otherwise).



[jira] [Commented] (HBASE-9120) ClassFinder logs errors that are not

2013-08-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728784#comment-13728784
 ] 

Hudson commented on HBASE-9120:
---

SUCCESS: Integrated in hbase-0.95 #401 (See 
[https://builds.apache.org/job/hbase-0.95/401/])
HBASE-9120 ClassFinder logs errors that are not (stack: rev 1510095)
* 
/hbase/branches/0.95/hbase-common/src/test/java/org/apache/hadoop/hbase/ClassFinder.java


> ClassFinder logs errors that are not
> 
>
> Key: HBASE-9120
> URL: https://issues.apache.org/jira/browse/HBASE-9120
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.94.10
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Minor
> Fix For: 0.98.0, 0.95.2, 0.94.11
>
> Attachments: 9120_trunk.txt, 92120_trunkv2.txt, HBASE-9120.patch
>
>
> ClassFinder logs error messages that are not actionable, so they just cause 
> distraction



[jira] [Updated] (HBASE-9096) Disable split during log replay

2013-08-04 Thread Jeffrey Zhong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeffrey Zhong updated HBASE-9096:
-

   Resolution: Fixed
Fix Version/s: 0.95.2
   0.98.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

> Disable split during log replay
> ---
>
> Key: HBASE-9096
> URL: https://issues.apache.org/jira/browse/HBASE-9096
> Project: HBase
>  Issue Type: Bug
>Reporter: Devaraj Das
>Assignee: Jeffrey Zhong
> Fix For: 0.98.0, 0.95.2
>
> Attachments: hbase-9096.patch
>
>
> When regions are allowed to take writes during recovery, we could end up in a 
> situation where a split of a region might be triggered. That would close the 
> old region leading to failure of the ongoing replay. In discussions with 
> [~jeffreyz], it seemed to make sense to just disable split during recovery.



[jira] [Commented] (HBASE-9096) Disable split during log replay

2013-08-04 Thread Jeffrey Zhong (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728781#comment-13728781
 ] 

Jeffrey Zhong commented on HBASE-9096:
--

Integrated the patch into 0.95 and trunk branch. Thanks [~te...@apache.org] for 
reviews!

> Disable split during log replay
> ---
>
> Key: HBASE-9096
> URL: https://issues.apache.org/jira/browse/HBASE-9096
> Project: HBase
>  Issue Type: Bug
>Reporter: Devaraj Das
>Assignee: Jeffrey Zhong
> Attachments: hbase-9096.patch
>
>
> When regions are allowed to take writes during recovery, we could end up in a 
> situation where a split of a region might be triggered. That would close the 
> old region leading to failure of the ongoing replay. In discussions with 
> [~jeffreyz], it seemed to make sense to just disable split during recovery.
