[jira] [Updated] (HBASE-5813) Retry immediately after a NotServingRegionException in a multiput

2013-11-06 Thread Davanum Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davanum Srinivas updated HBASE-5813:


Status: Patch Available  (was: Open)

> Retry immediately after a NotServingRegionException in a multiput
> -----------------------------------------------------------------
>
> Key: HBASE-5813
> URL: https://issues.apache.org/jira/browse/HBASE-5813
> Project: HBase
>  Issue Type: Improvement
>Reporter: Mikhail Bautin
>Assignee: Mikhail Bautin
> Attachments: ASF.LICENSE.NOT.GRANTED--D2847.1.patch, 
> ASF.LICENSE.NOT.GRANTED--D2847.10.patch, 
> ASF.LICENSE.NOT.GRANTED--D2847.11.patch, 
> ASF.LICENSE.NOT.GRANTED--D2847.12.patch, 
> ASF.LICENSE.NOT.GRANTED--D2847.2.patch, 
> ASF.LICENSE.NOT.GRANTED--D2847.3.patch, 
> ASF.LICENSE.NOT.GRANTED--D2847.4.patch, 
> ASF.LICENSE.NOT.GRANTED--D2847.5.patch, 
> ASF.LICENSE.NOT.GRANTED--D2847.6.patch, 
> ASF.LICENSE.NOT.GRANTED--D2847.7.patch, 
> ASF.LICENSE.NOT.GRANTED--D2847.8.patch, ASF.LICENSE.NOT.GRANTED--D2847.9.patch
>
>
> After we get some errors in a multiput, we invalidate the region location 
> cache and wait for the configured time interval according to the backoff 
> policy. However, if all "errors" in multiput processing were 
> NotServingRegionExceptions, we don't really need to wait; we can retry 
> immediately.
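The backoff decision above can be sketched as follows. This is a hypothetical illustration with made-up names, not the actual HBase client code: when every error in the batch was a NotServingRegionException, skip the sleep and retry at once against the refreshed region locations.

```java
// Hypothetical sketch of the retry policy (illustrative names only).
public class MultiPutRetrySketch {
    /** Exponential backoff, except when all errors were NSREs. */
    static long backoffMillis(int attempt, boolean allErrorsWereNSRE) {
        if (allErrorsWereNSRE) {
            // Region has likely just moved; the location cache was
            // invalidated, so an immediate retry is cheap and useful.
            return 0L;
        }
        // Capped exponential backoff for genuine errors.
        return Math.min(1000L << Math.min(attempt, 6), 32_000L);
    }

    public static void main(String[] args) {
        System.out.println(backoffMillis(3, false)); // 8000
        System.out.println(backoffMillis(3, true));  // 0
    }
}
```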



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9000) Linear reseek in Memstore

2013-11-06 Thread Chao Shi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13814700#comment-13814700
 ] 

Chao Shi commented on HBASE-9000:
---------------------------------

bq. Should we do the same thing in StoreFileScanner? 
Yes, I think so.

bq. If so, why not do this in StoreScanner, for example, call next some times 
before call reseek...
This is because StoreScanner does not have enough knowledge to judge whether to 
do a reseek or several calls to next. As discussed earlier in this thread, an 
attempt to implement reseek as repeated next calls may hit an uncached block, 
whose cost is huge compared to a logarithmic reseek that touches only cached 
index blocks.

> Linear reseek in Memstore
> -------------------------
>
> Key: HBASE-9000
> URL: https://issues.apache.org/jira/browse/HBASE-9000
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 0.89-fb
>Reporter: Shane Hogan
>Priority: Minor
> Fix For: 0.89-fb
>
> Attachments: hbase-9000-benchmark-program.patch, 
> hbase-9000-port-fb.patch, hbase-9000.patch
>
>
> This is to address the linear reseek in MemStoreScanner. Currently, reseek 
> iterates over the kvset and the snapshot linearly by just calling next 
> repeatedly. The new solution is to do this linear seek up to a configurable 
> maximum number of times; if the seek is not yet complete, fall back to a 
> logarithmic seek.
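A minimal sketch of the hybrid strategy described above, using a NavigableSet as a stand-in for the memstore's sorted KV set. MAX_LINEAR_STEPS plays the role of the configurable limit; all names are illustrative, not the actual MemStoreScanner code.

```java
import java.util.NavigableSet;
import java.util.TreeSet;

// Hypothetical sketch: try a few next() steps first (cheap when the target
// is close), then fall back to a logarithmic jump.
public class HybridReseekSketch {
    static final int MAX_LINEAR_STEPS = 8; // illustrative default

    /** Returns the first element >= target, or null if none. */
    static Integer reseek(NavigableSet<Integer> set, Integer current, Integer target) {
        Integer kv = current;
        for (int i = 0; i < MAX_LINEAR_STEPS; i++) {
            if (kv == null || kv.compareTo(target) >= 0) {
                return kv;           // linear walk reached the target
            }
            kv = set.higher(kv);     // one next() step
        }
        // Still short of the target: logarithmic fallback.
        return set.ceiling(target);
    }

    public static void main(String[] args) {
        NavigableSet<Integer> set = new TreeSet<>();
        for (int i = 0; i < 1000; i += 2) set.add(i);
        System.out.println(reseek(set, 0, 5));   // 6
        System.out.println(reseek(set, 0, 500)); // 500
    }
}
```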





[jira] [Created] (HBASE-9901) Add a toString in HTable, fix a log in AssignmentManager

2013-11-06 Thread Nicolas Liochon (JIRA)
Nicolas Liochon created HBASE-9901:
--

 Summary: Add a toString in HTable, fix a log in AssignmentManager
 Key: HBASE-9901
 URL: https://issues.apache.org/jira/browse/HBASE-9901
 Project: HBase
  Issue Type: Bug
  Components: Client, regionserver
Affects Versions: 0.96.0, 0.98.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
Priority: Trivial
 Fix For: 0.98.0, 0.96.1








[jira] [Commented] (HBASE-9900) Fix unintended byte[].toString in AccessController

2013-11-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13814719#comment-13814719
 ] 

Hadoop QA commented on HBASE-9900:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12612330/9900.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 1 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 3 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7746//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7746//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7746//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7746//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7746//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7746//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7746//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7746//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7746//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7746//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7746//console

This message is automatically generated.

> Fix unintended byte[].toString in AccessController
> --
>
> Key: HBASE-9900
> URL: https://issues.apache.org/jira/browse/HBASE-9900
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0, 0.96.1
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
> Attachments: 9900.patch
>
>
> Found while running FindBugs for another change.





[jira] [Updated] (HBASE-9901) Add a toString in HTable, fix a log in AssignmentManager

2013-11-06 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-9901:
---

Attachment: 9901.v1.patch

> Add a toString in HTable, fix a log in AssignmentManager
> 
>
> Key: HBASE-9901
> URL: https://issues.apache.org/jira/browse/HBASE-9901
> Project: HBase
>  Issue Type: Bug
>  Components: Client, regionserver
>Affects Versions: 0.98.0, 0.96.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
>Priority: Trivial
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9901.v1.patch
>
>






[jira] [Updated] (HBASE-9901) Add a toString in HTable, fix a log in AssignmentManager

2013-11-06 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-9901:
---

Status: Patch Available  (was: Open)

> Add a toString in HTable, fix a log in AssignmentManager
> 
>
> Key: HBASE-9901
> URL: https://issues.apache.org/jira/browse/HBASE-9901
> Project: HBase
>  Issue Type: Bug
>  Components: Client, regionserver
>Affects Versions: 0.96.0, 0.98.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
>Priority: Trivial
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9901.v1.patch
>
>






[jira] [Updated] (HBASE-9885) Avoid some Result creation in protobuf conversions

2013-11-06 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-9885:
---

Status: Open  (was: Patch Available)

> Avoid some Result creation in protobuf conversions
> --
>
> Key: HBASE-9885
> URL: https://issues.apache.org/jira/browse/HBASE-9885
> Project: HBase
>  Issue Type: Bug
>  Components: Client, Protobufs, regionserver
>Affects Versions: 0.96.0, 0.98.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9885.v1.patch, 9885.v2, 9885.v2.patch
>
>
> We create a lot of Result objects that we could avoid, as they contain 
> nothing other than a boolean value. We also sometimes create a protobuf 
> builder on this path; this can be avoided.
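One common way to realize this, sketched here with hypothetical names (not the actual HBase or protobuf API), is to return shared immutable instances for the two boolean-only outcomes instead of allocating a new Result per call:

```java
// Hypothetical sketch: boolean-only responses reuse two shared instances,
// so the conversion hot path allocates nothing.
public class BooleanResultSketch {
    final boolean exists;
    private BooleanResultSketch(boolean exists) { this.exists = exists; }

    private static final BooleanResultSketch TRUE_RESULT = new BooleanResultSketch(true);
    private static final BooleanResultSketch FALSE_RESULT = new BooleanResultSketch(false);

    /** No allocation: hand back one of the two shared instances. */
    static BooleanResultSketch of(boolean exists) {
        return exists ? TRUE_RESULT : FALSE_RESULT;
    }

    public static void main(String[] args) {
        // Identity equality shows no new object was created.
        System.out.println(of(true) == of(true)); // true
    }
}
```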





[jira] [Updated] (HBASE-9885) Avoid some Result creation in protobuf conversions

2013-11-06 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-9885:
---

Attachment: 9885.v3.patch

> Avoid some Result creation in protobuf conversions
> --
>
> Key: HBASE-9885
> URL: https://issues.apache.org/jira/browse/HBASE-9885
> Project: HBase
>  Issue Type: Bug
>  Components: Client, Protobufs, regionserver
>Affects Versions: 0.98.0, 0.96.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9885.v1.patch, 9885.v2, 9885.v2.patch, 9885.v3.patch
>
>
> We create a lot of Result objects that we could avoid, as they contain 
> nothing other than a boolean value. We also sometimes create a protobuf 
> builder on this path; this can be avoided.





[jira] [Updated] (HBASE-9885) Avoid some Result creation in protobuf conversions

2013-11-06 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-9885:
---

Status: Patch Available  (was: Open)

> Avoid some Result creation in protobuf conversions
> --
>
> Key: HBASE-9885
> URL: https://issues.apache.org/jira/browse/HBASE-9885
> Project: HBase
>  Issue Type: Bug
>  Components: Client, Protobufs, regionserver
>Affects Versions: 0.96.0, 0.98.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9885.v1.patch, 9885.v2, 9885.v2.patch, 9885.v3.patch
>
>
> We create a lot of Result objects that we could avoid, as they contain 
> nothing other than a boolean value. We also sometimes create a protobuf 
> builder on this path; this can be avoided.





[jira] [Commented] (HBASE-8323) Low hanging checksum improvements

2013-11-06 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13814737#comment-13814737
 ] 

Nicolas Liochon commented on HBASE-8323:


I saw it while looking at a server. The class NativeCrc32 is not public in 
hadoop-common (while PureJavaCrc32C is public, in both the Java and Hadoop 
meanings of the term). The easiest way for HBase would be to reuse 
NativeCrc32 directly...  [~t...@lipcon.org], is there any problem with making 
NativeCrc32 public, or at least LimitedPrivate?

> Low hanging checksum improvements
> ---------------------------------
>
> Key: HBASE-8323
> URL: https://issues.apache.org/jira/browse/HBASE-8323
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance
>Reporter: Enis Soztutar
>
> Over in Hadoop land, [~tlipcon] did some improvements for checksums: a 
> native implementation of CRC32C (HADOOP-7445) and bulk verification of 
> checksums (HADOOP-7444). 
> In HBase, we can:
>  - Also develop a bulk verify API. Regardless of 
> hbase.hstore.bytes.per.checksum, we always want to verify the whole 
> checksum for the hfile block.
>  - Enable NativeCrc32 to be used as a checksum algorithm. It is not clear how 
> much gain we can expect over pure-Java CRC32. 
> Longer term, though, we should focus on convincing the HDFS folks to do 
> inline checksums (HDFS-2699).
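The per-chunk checksum layout that hbase.hstore.bytes.per.checksum controls, and the bulk-verify idea, can be sketched roughly as follows. java.util.zip.CRC32 stands in for CRC32C/NativeCrc32, and all names are illustrative, not the actual HBase code:

```java
import java.util.zip.CRC32;

// Hypothetical sketch: a block payload is covered by one CRC per fixed-size
// chunk, and verification walks every chunk of the block in one pass.
public class ChunkedChecksumSketch {
    static long[] checksum(byte[] data, int bytesPerChecksum) {
        int n = (data.length + bytesPerChecksum - 1) / bytesPerChecksum;
        long[] sums = new long[n];
        CRC32 crc = new CRC32();
        for (int i = 0; i < n; i++) {
            crc.reset();
            int off = i * bytesPerChecksum;
            crc.update(data, off, Math.min(bytesPerChecksum, data.length - off));
            sums[i] = crc.getValue();
        }
        return sums;
    }

    /** Bulk verify: check all chunks of the block at once. */
    static boolean verify(byte[] data, int bytesPerChecksum, long[] expected) {
        return java.util.Arrays.equals(checksum(data, bytesPerChecksum), expected);
    }

    public static void main(String[] args) {
        byte[] block = "hello hbase block".getBytes();
        long[] sums = checksum(block, 8);
        System.out.println(verify(block, 8, sums)); // true
        block[0] ^= 1;                              // corrupt one byte
        System.out.println(verify(block, 8, sums)); // false
    }
}
```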





[jira] [Commented] (HBASE-7662) [Per-KV security] Store and apply per cell ACLs into/from KeyValue tags

2013-11-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13814745#comment-13814745
 ] 

Hadoop QA commented on HBASE-7662:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12612329/7662.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 9 new 
or modified tests.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 4 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 5 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.security.access.TestAccessControlFilter

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7747//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7747//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7747//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7747//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7747//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7747//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7747//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7747//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7747//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7747//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7747//console


> [Per-KV security] Store and apply per cell ACLs into/from KeyValue tags
> ---
>
> Key: HBASE-7662
> URL: https://issues.apache.org/jira/browse/HBASE-7662
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors, security
>Affects Versions: 0.98.0
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
> Attachments: 7662.patch, 7662.patch, 7662.patch, 7662.patch
>
>
> We can improve the performance of per-cell authorization if the read of the 
> cell ACL, if any, is combined with the sequential read of the cell data 
> already in progress. When tags are inlined with KVs in block encoding (see 
> HBASE-7448, and more generally HBASE-7233), we can use them to carry cell 
> ACLs instead of using out-of-line storage (HBASE-7661) for that purpose.





[jira] [Commented] (HBASE-9892) Add info port to ServerName to support multi instances in a node

2013-11-06 Thread Liu Shaohui (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13814752#comment-13814752
 ] 

Liu Shaohui commented on HBASE-9892:


Uploaded a new patch for [~enis]'s review. 
If this patch is OK, I will start on a trunk patch. 

> Add info port to ServerName to support multi instances in a node
> 
>
> Key: HBASE-9892
> URL: https://issues.apache.org/jira/browse/HBASE-9892
> Project: HBase
>  Issue Type: Improvement
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
>Priority: Minor
> Attachments: HBASE-9892-0.94-v1.diff, HBASE-9892-0.94-v2.diff
>
>
> The full GC time of a regionserver with a big heap (> 30G) usually cannot be 
> kept under 30s. At the same time, servers with 64G of memory are now common. 
> So we try to deploy multiple RS instances (2-3) in a single node, with the 
> heap of each RS at about 20G ~ 24G.
> Most things work fine, except the hbase web ui. The master gets the RS info 
> port from conf, which is not suitable for this situation of multiple RS 
> instances in a node. So we add the info port to ServerName:
> a. At startup, the RS reports its info port to the HMaster.
> b. For the root region, the RS writes the servername with info port to the 
> zookeeper root-region-server node.
> c. For meta regions, the RS writes the servername with info port to the root 
> region.
> d. For user regions, the RS writes the servername with info port to the meta 
> regions.
> So the HMaster and clients can get the info port from the servername.
> To test this feature, I changed the RS count from 1 to 3 in standalone mode, 
> so it can be tested there.
> I think Hoya (hbase on yarn) will encounter the same problem. Does anyone 
> know how Hoya handles this?
> PS: There are different formats for the servername in the zk node and the 
> meta table; I think we need to unify them and refactor the code.
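A rough sketch of carrying the info port inside the serialized servername. The host,port,startcode,infoPort layout here is hypothetical, not the actual wire format, and it includes a fallback for the old three-field form:

```java
// Hypothetical sketch: the serialized server name carries the info port so
// the master and clients can locate the web UI of each of several RS
// instances on one host.
public class ServerNameSketch {
    final String host; final int port; final long startcode; final int infoPort;

    ServerNameSketch(String host, int port, long startcode, int infoPort) {
        this.host = host; this.port = port;
        this.startcode = startcode; this.infoPort = infoPort;
    }

    String serialize() {
        return host + "," + port + "," + startcode + "," + infoPort;
    }

    static ServerNameSketch parse(String s) {
        String[] p = s.split(",");
        // Tolerate the old three-field format by defaulting infoPort.
        int info = p.length > 3 ? Integer.parseInt(p[3]) : -1;
        return new ServerNameSketch(p[0], Integer.parseInt(p[1]),
                                    Long.parseLong(p[2]), info);
    }

    public static void main(String[] args) {
        ServerNameSketch sn = parse("rs1.example.com,60020,1383700000000,60030");
        System.out.println(sn.infoPort); // 60030
        System.out.println(parse("rs1.example.com,60020,1383700000000").infoPort); // -1
    }
}
```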





[jira] [Updated] (HBASE-9892) Add info port to ServerName to support multi instances in a node

2013-11-06 Thread Liu Shaohui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liu Shaohui updated HBASE-9892:
---

Attachment: HBASE-9892-0.94-v3.diff

Updated per [~enis]'s review. Please review it in RB. Thanks.

> Add info port to ServerName to support multi instances in a node
> 
>
> Key: HBASE-9892
> URL: https://issues.apache.org/jira/browse/HBASE-9892
> Project: HBase
>  Issue Type: Improvement
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
>Priority: Minor
> Attachments: HBASE-9892-0.94-v1.diff, HBASE-9892-0.94-v2.diff, 
> HBASE-9892-0.94-v3.diff
>
>
> The full GC time of a regionserver with a big heap (> 30G) usually cannot be 
> kept under 30s. At the same time, servers with 64G of memory are now common. 
> So we try to deploy multiple RS instances (2-3) in a single node, with the 
> heap of each RS at about 20G ~ 24G.
> Most things work fine, except the hbase web ui. The master gets the RS info 
> port from conf, which is not suitable for this situation of multiple RS 
> instances in a node. So we add the info port to ServerName:
> a. At startup, the RS reports its info port to the HMaster.
> b. For the root region, the RS writes the servername with info port to the 
> zookeeper root-region-server node.
> c. For meta regions, the RS writes the servername with info port to the root 
> region.
> d. For user regions, the RS writes the servername with info port to the meta 
> regions.
> So the HMaster and clients can get the info port from the servername.
> To test this feature, I changed the RS count from 1 to 3 in standalone mode, 
> so it can be tested there.
> I think Hoya (hbase on yarn) will encounter the same problem. Does anyone 
> know how Hoya handles this?
> PS: There are different formats for the servername in the zk node and the 
> meta table; I think we need to unify them and refactor the code.





[jira] [Commented] (HBASE-9855) evictBlocksByHfileName improvement for bucket cache

2013-11-06 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13814763#comment-13814763
 ] 

Nicolas Liochon commented on HBASE-9855:


Sorry about the false alarm, [~xieliang007]. I swear I really ran the tests 50 
times :-)! Thanks for finding the jira that actually caused the issue.

> evictBlocksByHfileName improvement for bucket cache
> ---
>
> Key: HBASE-9855
> URL: https://issues.apache.org/jira/browse/HBASE-9855
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Affects Versions: 0.98.0
>Reporter: Liang Xie
>Assignee: Liang Xie
> Fix For: 0.98.0, 0.96.1
>
> Attachments: HBase-9855-v4.txt
>
>
> Indeed, it comes from FB's L2 cache, [~avf]'s nice work; I just did a simple 
> backport here. It turns a linear-time search through the whole cache map into 
> a log-access-time map lookup.
> A small benchmark showed it adds a bit of GC overhead, but considering the 
> evict-on-close triggered by frequent compaction activity, that seems 
> reasonable.
> I also thought about adding an "evictOnClose" config to the BucketCache ctor 
> and only updating the new index map while evictOnClose is true. That value 
> could be set per family schema, but BucketCache is a global instance, not 
> per-family, so I am ignoring it for now...
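A sketch of the secondary-index idea (simplified, with hypothetical names; the actual backport keys a sorted map rather than a hash map): keep a map from hfile name to its cached block keys, so evictBlocksByHfileName touches only that file's entries instead of scanning the whole cache.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch: a per-file index beside the main cache map makes
// whole-file eviction proportional to that file's block count.
public class EvictIndexSketch {
    final Map<String, byte[]> cache = new HashMap<>();             // blockKey -> data
    final Map<String, Set<String>> blocksByFile = new HashMap<>(); // hfile -> blockKeys

    void put(String hfileName, String blockKey, byte[] data) {
        cache.put(blockKey, data);
        blocksByFile.computeIfAbsent(hfileName, k -> new HashSet<>()).add(blockKey);
    }

    /** Evict all blocks of one hfile without scanning the whole cache. */
    int evictBlocksByHfileName(String hfileName) {
        Set<String> keys = blocksByFile.remove(hfileName);
        if (keys == null) return 0;
        for (String k : keys) cache.remove(k);
        return keys.size();
    }

    public static void main(String[] args) {
        EvictIndexSketch c = new EvictIndexSketch();
        c.put("f1", "f1#0", new byte[]{1});
        c.put("f1", "f1#1", new byte[]{2});
        c.put("f2", "f2#0", new byte[]{3});
        System.out.println(c.evictBlocksByHfileName("f1")); // 2
        System.out.println(c.cache.size());                 // 1
    }
}
```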





[jira] [Commented] (HBASE-9855) evictBlocksByHfileName improvement for bucket cache

2013-11-06 Thread Liang Xie (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13814765#comment-13814765
 ] 

Liang Xie commented on HBASE-9855:
--

[~dnicolas] haha:)

> evictBlocksByHfileName improvement for bucket cache
> ---
>
> Key: HBASE-9855
> URL: https://issues.apache.org/jira/browse/HBASE-9855
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Affects Versions: 0.98.0
>Reporter: Liang Xie
>Assignee: Liang Xie
> Fix For: 0.98.0, 0.96.1
>
> Attachments: HBase-9855-v4.txt
>
>
> Indeed, it comes from FB's L2 cache, [~avf]'s nice work; I just did a simple 
> backport here. It turns a linear-time search through the whole cache map into 
> a log-access-time map lookup.
> A small benchmark showed it adds a bit of GC overhead, but considering the 
> evict-on-close triggered by frequent compaction activity, that seems 
> reasonable.
> I also thought about adding an "evictOnClose" config to the BucketCache ctor 
> and only updating the new index map while evictOnClose is true. That value 
> could be set per family schema, but BucketCache is a global instance, not 
> per-family, so I am ignoring it for now...





[jira] [Commented] (HBASE-9873) Some improvements in hlog and hlog split

2013-11-06 Thread Liu Shaohui (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13814809#comment-13814809
 ] 

Liu Shaohui commented on HBASE-9873:


Sorry for the late reply.

[~liochon]
{quote}
2) Add a background hlog compaction thread to compaction the hlog: remove 
the hlog entries whose data have been flushed to hfile. The scenario is that in 
a share cluster, write requests of a table may very little and periodical, a 
lots of hlogs can not be cleaned for entries of this table in those hlogs.

hbase.regionserver.optionalcacheflushinterval can be used to limit the effect 
of such tables. In my mind, this should be set to something like 10 minutes 
max, the default (1 hour) is very conservative.
{quote}
Yes, this config does help. But I think a small optionalcacheflushinterval will 
produce small hfiles, which sacrifices read latency and triggers more 
compactions.

{quote}
7) Consider the hlog data locality when schedule the hlog split task. 
Schedule the hlog to a splitter which is near to hlog data.

We have a JIRA HBASE-6772 on this.

I've got it partly done, actually. I need to finish it and test it.
{quote}
Great. I can help test and review it if needed.

{quote}
8) Support multi hlog writers and switching to another hlog writer when 
long write latency to current hlog due to possible temporary network spike?

As in the original big table paper you mean? I agree. 
{quote}
Yes.

> Some improvements in hlog and hlog split
> 
>
> Key: HBASE-9873
> URL: https://issues.apache.org/jira/browse/HBASE-9873
> Project: HBase
>  Issue Type: Improvement
>  Components: MTTR, wal
>Reporter: Liu Shaohui
>Priority: Critical
>  Labels: failover, hlog
>
> Some improvements in hlog and hlog split:
> 1) Try to clean old hlogs after each memstore flush to avoid unnecessary hlog 
> splits in failover. Currently, hlog cleaning is only run when rolling the 
> hlog writer. 
> 2) Add a background hlog compaction thread to compact the hlogs: remove the 
> hlog entries whose data have been flushed to hfiles. The scenario is that in 
> a shared cluster, write requests for a table may be very light and periodic, 
> so a lot of hlogs cannot be cleaned because of entries for that table in 
> those hlogs.
> 3) Rely on the smallest of the biggest hfile seqIds of all previously served 
> regions to ignore some entries. Facebook implemented this in HBASE-6508, 
> and we backported it to hbase 0.94 in HBASE-9568.
> 4) Support running multiple hlog splitters on a single RS and on the master 
> (the latter can boost split efficiency for a tiny cluster).
> 5) Enable multiple splitters on a 'big' hlog file by splitting the hlog 
> (logically) into slices (of configurable size, e.g. the hdfs block size, 
> 64M), and support concurrent split tasks on a single hlog file slice.
> 6) Do not cancel a timed-out split task until another task reports success 
> (this avoids the scenario where the split for an hlog file fails because no 
> single task can succeed within the timeout period); instead, reschedule the 
> same split task to reduce split time (to avoid stragglers in hlog split).
> 7) Consider hlog data locality when scheduling the hlog split task: schedule 
> the hlog to a splitter that is near the hlog data.
> 8) Support multiple hlog writers, switching to another hlog writer when there 
> is long write latency to the current hlog due to a possible temporary network 
> spike. 
> This is a draft listing the hlog improvements we plan to implement in the 
> near future. Comments and discussion are welcome.
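Improvement 1) above reduces to a seqId comparison. A rough, hypothetical sketch (names and bookkeeping are illustrative): an hlog file becomes removable once the lowest flushed seqId across all regions it contains has passed the highest seqId written to that file.

```java
import java.util.Map;
import java.util.TreeMap;

// Hypothetical sketch of the post-flush cleanup check: compare each log
// file's highest written seqId against the lowest seqId already flushed
// to hfiles across the regions in that log.
public class HlogCleanupSketch {
    /**
     * @param maxSeqIdPerLog log name -> highest seqId written to that log
     * @param flushedSeqId   lowest seqId flushed to hfiles across all regions
     */
    static java.util.List<String> removableLogs(TreeMap<String, Long> maxSeqIdPerLog,
                                                long flushedSeqId) {
        java.util.List<String> removable = new java.util.ArrayList<>();
        for (Map.Entry<String, Long> e : maxSeqIdPerLog.entrySet()) {
            // Every entry in this log is already persisted: safe to delete.
            if (e.getValue() <= flushedSeqId) removable.add(e.getKey());
        }
        return removable;
    }

    public static void main(String[] args) {
        TreeMap<String, Long> logs = new TreeMap<>();
        logs.put("wal.1", 100L);
        logs.put("wal.2", 250L);
        logs.put("wal.3", 400L);
        System.out.println(removableLogs(logs, 260L)); // [wal.1, wal.2]
    }
}
```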





[jira] [Commented] (HBASE-9885) Avoid some Result creation in protobuf conversions

2013-11-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13814810#comment-13814810
 ] 

Hadoop QA commented on HBASE-9885:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12612341/9885.v3.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 1 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 3 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at 
org.apache.hadoop.hbase.TestZooKeeper.testRegionAssignmentAfterMasterRecoveryDueToZKExpiry(TestZooKeeper.java:488)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7748//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7748//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7748//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7748//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7748//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7748//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7748//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7748//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7748//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7748//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7748//console


> Avoid some Result creation in protobuf conversions
> --
>
> Key: HBASE-9885
> URL: https://issues.apache.org/jira/browse/HBASE-9885
> Project: HBase
>  Issue Type: Bug
>  Components: Client, Protobufs, regionserver
>Affects Versions: 0.98.0, 0.96.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9885.v1.patch, 9885.v2, 9885.v2.patch, 9885.v3.patch
>
>
> We create a lot of Result objects that we could avoid, as they contain 
> nothing other than a boolean value. We also sometimes create a protobuf 
> builder on this path; this can be avoided.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Resolved] (HBASE-7927) Two versions of netty with hadoop.profile=2.0: 3.5.9 and 3.2.4

2013-11-06 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon resolved HBASE-7927.


Resolution: Fixed

We're fine in 0.96 now.

> Two versions of netty with hadoop.profile=2.0: 3.5.9 and 3.2.4
> --
>
> Key: HBASE-7927
> URL: https://issues.apache.org/jira/browse/HBASE-7927
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.95.2
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
>
> I don't know why, but when you do a mvn dependency:tree, everything looks 
> fine. When you look at the generated target/cached_classpath.txt you see 2 
> versions of netty: netty-3.2.4.Final.jar and netty-3.5.9.Final.jar.
> This is bad and can lead to unpredictable behavior.
> I haven't looked at the other dependencies.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-5813) Retry immediately after a NotServingRegionException in a multiput

2013-11-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13814820#comment-13814820
 ] 

Hadoop QA commented on HBASE-5813:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12523281/ASF.LICENSE.NOT.GRANTED--D2847.12.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7750//console

This message is automatically generated.

> Retry immediately after a NotServingRegionException in a multiput
> -
>
> Key: HBASE-5813
> URL: https://issues.apache.org/jira/browse/HBASE-5813
> Project: HBase
>  Issue Type: Improvement
>Reporter: Mikhail Bautin
>Assignee: Mikhail Bautin
> Attachments: ASF.LICENSE.NOT.GRANTED--D2847.1.patch, 
> ASF.LICENSE.NOT.GRANTED--D2847.10.patch, 
> ASF.LICENSE.NOT.GRANTED--D2847.11.patch, 
> ASF.LICENSE.NOT.GRANTED--D2847.12.patch, 
> ASF.LICENSE.NOT.GRANTED--D2847.2.patch, 
> ASF.LICENSE.NOT.GRANTED--D2847.3.patch, 
> ASF.LICENSE.NOT.GRANTED--D2847.4.patch, 
> ASF.LICENSE.NOT.GRANTED--D2847.5.patch, 
> ASF.LICENSE.NOT.GRANTED--D2847.6.patch, 
> ASF.LICENSE.NOT.GRANTED--D2847.7.patch, 
> ASF.LICENSE.NOT.GRANTED--D2847.8.patch, ASF.LICENSE.NOT.GRANTED--D2847.9.patch
>
>
> After we get some errors in a multiput, we invalidate the region location 
> cache and wait for the configured time interval according to the backoff 
> policy. However, if all the "errors" in multiput processing were 
> NotServingRegionExceptions, we don't really need to wait: we can retry 
> immediately.
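The retry decision described above can be reduced to a small check. The sketch below is an illustration of the idea only, not HBase's actual client code; the class and method names are hypothetical:

```java
import java.util.Arrays;
import java.util.List;

public class RetryPolicySketch {
    // Hypothetical stand-ins for the exception types involved.
    static class NotServingRegionException extends Exception {}
    static class OtherRegionException extends Exception {}

    /**
     * Backoff in ms before the next multiput retry. If every error was a
     * NotServingRegionException, the regions have merely moved, so after
     * refreshing the location cache we can retry immediately (0 ms);
     * otherwise we keep the configured backoff.
     */
    static long nextBackoffMs(List<? extends Exception> errors,
                              long configuredBackoffMs) {
        for (Exception e : errors) {
            if (!(e instanceof NotServingRegionException)) {
                return configuredBackoffMs; // a real failure: back off as usual
            }
        }
        return 0L; // only stale-location errors: retry at once
    }

    public static void main(String[] args) {
        System.out.println(nextBackoffMs(
            Arrays.asList(new NotServingRegionException()), 1000L)); // 0
        System.out.println(nextBackoffMs(
            Arrays.asList(new NotServingRegionException(),
                          new OtherRegionException()), 1000L)); // 1000
    }
}
```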



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-8625) Dependency version upgrade

2013-11-06 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-8625:
---

   Resolution: Fixed
Fix Version/s: 0.96.0
   Status: Resolved  (was: Patch Available)

Was fixed a while ago in some other jira.

> Dependency version upgrade
> --
>
> Key: HBASE-8625
> URL: https://issues.apache.org/jira/browse/HBASE-8625
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 0.98.0, 0.95.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
>Priority: Minor
> Fix For: 0.98.0, 0.96.0
>
> Attachments: 8625.v1.patch, 8625.v2.patch
>
>
> The junit dependency should be scoped "test".
> We should use a newer version of jaxb-api. One of our 3rd-party dependencies 
> would prefer a newer one: 
>  javax.xml.bind:jaxb-api:jar:2.1:compile (version managed from 2.2.2)
>  The latest is 2.2.4.
>  
> Not mandatory, but should be done:
>  guava 14.0.1
>  netty 3.6.6.Final
>  commons-codec.version 1.8
>  jackson.version 1.9.3



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9901) Add a toString in HTable, fix a log in AssignmentManager

2013-11-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13814824#comment-13814824
 ] 

Hadoop QA commented on HBASE-9901:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12612340/9901.v1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 1 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 4 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7749//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7749//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7749//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7749//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7749//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7749//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7749//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7749//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7749//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7749//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7749//console

This message is automatically generated.

> Add a toString in HTable, fix a log in AssignmentManager
> 
>
> Key: HBASE-9901
> URL: https://issues.apache.org/jira/browse/HBASE-9901
> Project: HBase
>  Issue Type: Bug
>  Components: Client, regionserver
>Affects Versions: 0.98.0, 0.96.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
>Priority: Trivial
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9901.v1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9901) Add a toString in HTable, fix a log in AssignmentManager

2013-11-06 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13814834#comment-13814834
 ] 

Nicolas Liochon commented on HBASE-9901:


There is no javadoc change in the patch, and the findbugs warnings in 
hbase-client or hbase-server seem unrelated...

> Add a toString in HTable, fix a log in AssignmentManager
> 
>
> Key: HBASE-9901
> URL: https://issues.apache.org/jira/browse/HBASE-9901
> Project: HBase
>  Issue Type: Bug
>  Components: Client, regionserver
>Affects Versions: 0.98.0, 0.96.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
>Priority: Trivial
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9901.v1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9874) Append and Increment operation drops Tags

2013-11-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13814836#comment-13814836
 ] 

Hudson commented on HBASE-9874:
---

SUCCESS: Integrated in HBase-TRUNK #4670 (See 
[https://builds.apache.org/job/HBase-TRUNK/4670/])
HBASE-9874 Append and Increment operation drops Tags (anoopsamjohn: rev 1539224)
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java
* /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java
* /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValue.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/BaseRegionObserver.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionObserver.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionCoprocessorHost.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestTags.java


> Append and Increment operation drops Tags
> -
>
> Key: HBASE-9874
> URL: https://issues.apache.org/jira/browse/HBASE-9874
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 0.98.0
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 0.98.0
>
> Attachments: AccessController.postMutationBeforeWAL.txt, 
> HBASE-9874.patch, HBASE-9874_V2.patch, HBASE-9874_V3.patch
>
>
> We should consider tags in the existing cells as well as tags coming in the 
> cells within Increment/Append



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-8541) implement flush-into-stripes in stripe compactions

2013-11-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13814837#comment-13814837
 ] 

Hudson commented on HBASE-8541:
---

SUCCESS: Integrated in HBase-TRUNK #4670 (See 
[https://builds.apache.org/job/HBase-TRUNK/4670/])
HBASE-8541 implement flush-into-stripes in stripe compactions (sershe: rev 
1539211)
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/DefaultStoreFileManager.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/DefaultStoreFlusher.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFileManager.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFlusher.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StripeMultiFileWriter.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StripeStoreConfig.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StripeStoreEngine.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StripeStoreFileManager.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StripeStoreFlusher.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/StripeCompactionPolicy.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStripeCompactor.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStripeStoreFileManager.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/compactions/TestStripeCompactionPolicy.java


> implement flush-into-stripes in stripe compactions
> --
>
> Key: HBASE-8541
> URL: https://issues.apache.org/jira/browse/HBASE-8541
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HBASE-8541-latest-with-dependencies.patch, 
> HBASE-8541-latest-with-dependencies.patch, 
> HBASE-8541-latest-with-dependencies.patch, 
> HBASE-8541-latest-with-dependencies.patch, HBASE-8541-v0.patch, 
> HBASE-8541-v1.patch, HBASE-8541-v2.patch, HBASE-8541-v3.patch, 
> HBASE-8541-v4.patch, HBASE-8541-v5.patch
>
>
> Flush will be able to write into multiple files under this design, avoiding 
> L0 I/O amplification.
> I have a patch that is missing just one feature - support for concurrent 
> flushes and stripe changes. This can be done via extensive try-locking of 
> stripe changes and flushes, or via advisory flags without blocking flushes, 
> dumping conflicting flushes into L0 in case of (very rare) collisions. For 
> file loading in the latter case, a set-cover-like problem needs to be solved 
> to determine optimal stripes. That will also address Jimmy's concern about 
> getting rid of metadata, btw. However, I currently don't have time for that. 
> I plan to attach the try-locking patch first, but this probably won't happen 
> for a couple of weeks and should not block the main reviews. Hopefully this 
> will be added on top of the main reviews.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9873) Some improvements in hlog and hlog split

2013-11-06 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13814843#comment-13814843
 ] 

Nicolas Liochon commented on HBASE-9873:


bq. Yes, this config does help. But I think a small optionalcacheflushinterval 
will produce small hfiles, which sacrifices read latency and causes more 
compactions.
Well, if the writes are "very little and periodical", it should not be an 
issue, no? If we're speaking about a heavily written table, the regions should 
be flushed by the standard flush.

bq. Great. I can help to test and review it if needed.
Thanks a lot, I will send you the patch when it's ready.

> Some improvements in hlog and hlog split
> 
>
> Key: HBASE-9873
> URL: https://issues.apache.org/jira/browse/HBASE-9873
> Project: HBase
>  Issue Type: Improvement
>  Components: MTTR, wal
>Reporter: Liu Shaohui
>Priority: Critical
>  Labels: failover, hlog
>
> Some improvements in hlog and hlog split
> 1) Try to clean old hlogs after each memstore flush to avoid unnecessary hlog 
> splits during failover. Currently, hlog cleaning is only run when rolling the 
> hlog writer.
> 2) Add a background hlog compaction thread to compact the hlogs: remove the 
> hlog entries whose data have been flushed to hfiles. The scenario is that in a 
> shared cluster, write requests to a table may be very light and periodic, so a 
> lot of hlogs cannot be cleaned because of this table's entries in those hlogs.
> 3) Rely on the smallest of the largest hfile seqIds of the previously served 
> regions to skip some entries. Facebook implemented this in HBASE-6508 
> and we backported it to hbase 0.94 in HBASE-9568.
> 4) Support running multiple hlog splitters on a single RS and on the 
> master (the latter can boost split efficiency for a tiny cluster).
> 5) Enable multiple splitters on a 'big' hlog file by (logically) splitting the 
> hlog into slices (of configurable size, e.g. the hdfs block size, 64M), and 
> support concurrent split tasks on a single hlog file slice.
> 6) Do not cancel a timed-out split task until another task reports success 
> (this avoids the scenario where the split of an hlog file fails because no 
> task can succeed within the timeout period), and reschedule the same split 
> task to reduce split time (to avoid stragglers in hlog split).
> 7) Consider hlog data locality when scheduling an hlog split task: 
> schedule the hlog to a splitter that is near the hlog data.
> 8) Support multiple hlog writers and switching to another hlog writer when 
> write latency to the current hlog becomes long due to a possible temporary 
> network spike.
> This is a draft listing the improvements to hlog that we plan to implement 
> in the near future. Comments and discussions are welcome.
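Point 1 hinges on per-region seqId bookkeeping: an old hlog becomes removable once every region with edits in it has flushed past that log's highest sequence id for the region, and running the check after each flush (not only on log roll) lets logs be reclaimed sooner. A minimal sketch of that check, with hypothetical names rather than HBase's actual classes:

```java
import java.util.HashMap;
import java.util.Map;

public class HlogCleanupSketch {
    /**
     * An old hlog is removable once every region that has edits in it has
     * flushed past the log's highest sequence id for that region.
     * maxSeqIdInLogByRegion: highest seqId present in the log, per region.
     * flushedSeqIdByRegion:  highest seqId already flushed, per region.
     */
    static boolean isLogRemovable(Map<String, Long> maxSeqIdInLogByRegion,
                                  Map<String, Long> flushedSeqIdByRegion) {
        for (Map.Entry<String, Long> e : maxSeqIdInLogByRegion.entrySet()) {
            long flushed = flushedSeqIdByRegion.getOrDefault(e.getKey(), -1L);
            if (flushed < e.getValue()) {
                return false; // unflushed edits still live in this log
            }
        }
        return true;
    }

    public static void main(String[] args) {
        Map<String, Long> inLog = new HashMap<>();
        inLog.put("region-1", 5L);
        Map<String, Long> flushed = new HashMap<>();
        flushed.put("region-1", 7L);
        System.out.println(isLogRemovable(inLog, flushed)); // true
    }
}
```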



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9873) Some improvements in hlog and hlog split

2013-11-06 Thread Liu Shaohui (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13814844#comment-13814844
 ] 

Liu Shaohui commented on HBASE-9873:


[~liochon]
{quote}
6) Do not cancel a timed-out split task until another task reports success 
(this avoids the scenario where the split of an hlog file fails because no 
task can succeed within the timeout period), and reschedule the same split 
task to reduce split time (to avoid stragglers in hlog split).

That's not HBASE-6738?
{quote}
Part of this suggestion is HBASE-6738. Actually, we want to introduce a 
speculative scheduler for hlog split tasks, like the speculative scheduler for 
map/reduce tasks in MapReduce. HBASE-6738 resubmits the task after a 
configurable timeout and interrupts the old task. But we want to resubmit it 
earlier if there is an idle split worker that we think may finish the split 
task sooner than the old one. There is no timeout config, and the 
"resubmission" does not interrupt the old task.
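The resubmission policy discussed here can be reduced to a small predicate. This is a sketch of the policy as I read it from the comment, not HBASE-6738's actual code; the names and the exact condition are assumptions:

```java
public class SpeculativeSplitSketch {
    /**
     * Whether to schedule a duplicate (speculative) hlog split attempt.
     * Assumed policy: resubmit only when a split worker is idle and the
     * running attempt has already exceeded the expected split time. The
     * original attempt is never interrupted; whichever attempt finishes
     * first wins.
     */
    static boolean shouldSpeculate(boolean idleWorkerAvailable,
                                   long runningMs, long expectedSplitMs) {
        return idleWorkerAvailable && runningMs > expectedSplitMs;
    }

    public static void main(String[] args) {
        System.out.println(shouldSpeculate(true, 2_000L, 1_000L));  // true
        System.out.println(shouldSpeculate(false, 2_000L, 1_000L)); // false
    }
}
```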



> Some improvements in hlog and hlog split
> 
>
> Key: HBASE-9873
> URL: https://issues.apache.org/jira/browse/HBASE-9873
> Project: HBase
>  Issue Type: Improvement
>  Components: MTTR, wal
>Reporter: Liu Shaohui
>Priority: Critical
>  Labels: failover, hlog
>
> Some improvements in hlog and hlog split
> 1) Try to clean old hlogs after each memstore flush to avoid unnecessary hlog 
> splits during failover. Currently, hlog cleaning is only run when rolling the 
> hlog writer.
> 2) Add a background hlog compaction thread to compact the hlogs: remove the 
> hlog entries whose data have been flushed to hfiles. The scenario is that in a 
> shared cluster, write requests to a table may be very light and periodic, so a 
> lot of hlogs cannot be cleaned because of this table's entries in those hlogs.
> 3) Rely on the smallest of the largest hfile seqIds of the previously served 
> regions to skip some entries. Facebook implemented this in HBASE-6508 
> and we backported it to hbase 0.94 in HBASE-9568.
> 4) Support running multiple hlog splitters on a single RS and on the 
> master (the latter can boost split efficiency for a tiny cluster).
> 5) Enable multiple splitters on a 'big' hlog file by (logically) splitting the 
> hlog into slices (of configurable size, e.g. the hdfs block size, 64M), and 
> support concurrent split tasks on a single hlog file slice.
> 6) Do not cancel a timed-out split task until another task reports success 
> (this avoids the scenario where the split of an hlog file fails because no 
> task can succeed within the timeout period), and reschedule the same split 
> task to reduce split time (to avoid stragglers in hlog split).
> 7) Consider hlog data locality when scheduling an hlog split task: 
> schedule the hlog to a splitter that is near the hlog data.
> 8) Support multiple hlog writers and switching to another hlog writer when 
> write latency to the current hlog becomes long due to a possible temporary 
> network spike.
> This is a draft listing the improvements to hlog that we plan to implement 
> in the near future. Comments and discussions are welcome.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9885) Avoid some Result creation in protobuf conversions

2013-11-06 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-9885:
---

Status: Patch Available  (was: Open)

> Avoid some Result creation in protobuf conversions
> --
>
> Key: HBASE-9885
> URL: https://issues.apache.org/jira/browse/HBASE-9885
> Project: HBase
>  Issue Type: Bug
>  Components: Client, Protobufs, regionserver
>Affects Versions: 0.96.0, 0.98.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9885.v1.patch, 9885.v2, 9885.v2.patch, 9885.v3.patch, 
> 9885.v3.patch
>
>
> We create a lot of Result objects that we could avoid, as they contain nothing 
> other than a boolean value. We sometimes also create a protobuf builder on 
> this path; this can be avoided.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9885) Avoid some Result creation in protobuf conversions

2013-11-06 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-9885:
---

Status: Open  (was: Patch Available)

> Avoid some Result creation in protobuf conversions
> --
>
> Key: HBASE-9885
> URL: https://issues.apache.org/jira/browse/HBASE-9885
> Project: HBase
>  Issue Type: Bug
>  Components: Client, Protobufs, regionserver
>Affects Versions: 0.96.0, 0.98.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9885.v1.patch, 9885.v2, 9885.v2.patch, 9885.v3.patch, 
> 9885.v3.patch
>
>
> We create a lot of Result objects that we could avoid, as they contain nothing 
> other than a boolean value. We sometimes also create a protobuf builder on 
> this path; this can be avoided.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9885) Avoid some Result creation in protobuf conversions

2013-11-06 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-9885:
---

Attachment: 9885.v3.patch

> Avoid some Result creation in protobuf conversions
> --
>
> Key: HBASE-9885
> URL: https://issues.apache.org/jira/browse/HBASE-9885
> Project: HBase
>  Issue Type: Bug
>  Components: Client, Protobufs, regionserver
>Affects Versions: 0.98.0, 0.96.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9885.v1.patch, 9885.v2, 9885.v2.patch, 9885.v3.patch, 
> 9885.v3.patch
>
>
> We create a lot of Result objects that we could avoid, as they contain nothing 
> other than a boolean value. We sometimes also create a protobuf builder on 
> this path; this can be avoided.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9873) Some improvements in hlog and hlog split

2013-11-06 Thread Liu Shaohui (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13814864#comment-13814864
 ] 

Liu Shaohui commented on HBASE-9873:


[~jeffreyz]
{quote}
5) Enable multiple splitters on a 'big' hlog file by (logically) splitting the 
hlog into slices (of configurable size, e.g. the hdfs block size, 64M)
I'd wait for our multiple-WAL solution, because it basically assumes we have 
the IO capacity but fewer worker slots; with multiple splitters per RS and a 
limited wal size, the suggestion seems unnecessary.
{quote}
OK. We can consider this suggestion after the multiple-WAL solution is in place.

{quote}
7) Consider hlog data locality when scheduling an hlog split task: 
schedule the hlog to a splitter that is near the hlog data.

We have a JIRA HBASE-6772 on this.
{quote}
Thanks a lot. I will follow this JIRA.


> Some improvements in hlog and hlog split
> 
>
> Key: HBASE-9873
> URL: https://issues.apache.org/jira/browse/HBASE-9873
> Project: HBase
>  Issue Type: Improvement
>  Components: MTTR, wal
>Reporter: Liu Shaohui
>Priority: Critical
>  Labels: failover, hlog
>
> Some improvements in hlog and hlog split
> 1) Try to clean old hlogs after each memstore flush to avoid unnecessary hlog 
> splits during failover. Currently, hlog cleaning is only run when rolling the 
> hlog writer.
> 2) Add a background hlog compaction thread to compact the hlogs: remove the 
> hlog entries whose data have been flushed to hfiles. The scenario is that in a 
> shared cluster, write requests to a table may be very light and periodic, so a 
> lot of hlogs cannot be cleaned because of this table's entries in those hlogs.
> 3) Rely on the smallest of the largest hfile seqIds of the previously served 
> regions to skip some entries. Facebook implemented this in HBASE-6508 
> and we backported it to hbase 0.94 in HBASE-9568.
> 4) Support running multiple hlog splitters on a single RS and on the 
> master (the latter can boost split efficiency for a tiny cluster).
> 5) Enable multiple splitters on a 'big' hlog file by (logically) splitting the 
> hlog into slices (of configurable size, e.g. the hdfs block size, 64M), and 
> support concurrent split tasks on a single hlog file slice.
> 6) Do not cancel a timed-out split task until another task reports success 
> (this avoids the scenario where the split of an hlog file fails because no 
> task can succeed within the timeout period), and reschedule the same split 
> task to reduce split time (to avoid stragglers in hlog split).
> 7) Consider hlog data locality when scheduling an hlog split task: 
> schedule the hlog to a splitter that is near the hlog data.
> 8) Support multiple hlog writers and switching to another hlog writer when 
> write latency to the current hlog becomes long due to a possible temporary 
> network spike.
> This is a draft listing the improvements to hlog that we plan to implement 
> in the near future. Comments and discussions are welcome.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9850) Issues with UI for table compact/split operation completion. After split/compaction operation using UI, the page is not automatically redirecting back using IE8/Firefox

2013-11-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13814867#comment-13814867
 ] 

Hadoop QA commented on HBASE-9850:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12612337/HBASE-9850.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 1 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 4 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.regionserver.wal.TestLogRolling

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7751//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7751//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7751//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7751//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7751//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7751//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7751//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7751//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7751//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7751//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7751//console

This message is automatically generated.

> Issues with UI for table compact/split operation completion. After 
> split/compaction operation using UI, the page is not automatically 
> redirecting back using IE8/Firefox.
> -
>
> Key: HBASE-9850
> URL: https://issues.apache.org/jira/browse/HBASE-9850
> Project: HBase
>  Issue Type: Bug
>  Components: UI
>Affects Versions: 0.94.11
>Reporter: Kashif J S
> Fix For: 0.98.0, 0.96.1, 0.94.14
>
> Attachments: HBASE-9850.patch
>
>
> Steps:
> 1. Create a table with regions.
> 2. Insert some data such that a few HFiles exist, fewer than the minimum 
> compaction file count (say 3 HFiles are there but min compaction files is 10).
> 3. From the UI, perform a compact operation on the table.
> The "TABLE ACTION REQUEST Accepted" page is displayed.
> 4. The operation fails because the compaction criteria are not met, but from 
> the UI compaction requests are continuously sent to the server. This happens 
> using IE (history.back() seems to resend the compact/split request). Firefox 
> seems OK in this case.
> 5. Later, no automatic redirection to the main master page occurs.
> SOLUTION:
> table.jsp in the hbase master.
> The meta tag for the HTML contains a refresh with javascript:history.back().
> A javascript cannot execute in a meta refresh tag like the above in table.jsp 
> and snapshot.jsp.
> 
> This will FAIL.
> W3Schools also suggests using a refresh in JavaScript rather than the meta tag.
> If above m

[jira] [Commented] (HBASE-9873) Some improvements in hlog and hlog split

2013-11-06 Thread Liu Shaohui (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13814868#comment-13814868
 ] 

Liu Shaohui commented on HBASE-9873:


[~stack]
{quote}
1) Try to clean old hlogs after each memstore flush to avoid unnecessary hlog 
splits during failover. Currently, hlog cleaning is only run when rolling the 
hlog writer.

Are we just scheduling more checks? Is that the idea? Doing it at flush time is 
a good idea as juncture for WAL-clean-up. Do you observe us lagging the cleanup 
by just doing it on log roll?
{quote}
Yes, It just schedules more checks for old hlogs. 
I will add some logs to check there are hlog lagging cleanups.

{quote}
2) Add a background hlog compaction thread to compact the hlogs: remove hlog 
entries whose data have already been flushed to HFiles. The scenario is that 
in a shared cluster, write requests to a table may be very light and periodic, 
so many hlogs cannot be cleaned because of that table's entries in them.

Do you think this will help? You will have to do a bunch of reading and 
rewriting, right? You will only rewrite WALs that have at least some percentage 
of flushed edits? Would it be better to work on making it so we are better at 
flushing the memstores that have edits holding up our letting go of old WALs? 
Just asking.
{quote}
Yes, exactly.


> Some improvements in hlog and hlog split
> 
>
> Key: HBASE-9873
> URL: https://issues.apache.org/jira/browse/HBASE-9873
> Project: HBase
>  Issue Type: Improvement
>  Components: MTTR, wal
>Reporter: Liu Shaohui
>Priority: Critical
>  Labels: failover, hlog
>
> Some improvements in hlog and hlog split:
> 1) Try to clean old hlogs after each memstore flush to avoid unnecessary 
> hlog splits during failover. Currently, hlog cleaning is only run when 
> rolling the hlog writer.
> 2) Add a background hlog compaction thread to compact the hlogs: remove hlog 
> entries whose data have already been flushed to HFiles. The scenario is that 
> in a shared cluster, write requests to a table may be very light and 
> periodic, so many hlogs cannot be cleaned because of that table's entries in 
> them.
> 3) Rely on the smallest of all the biggest HFile seqIds of previously served 
> regions to ignore some entries. Facebook implemented this in HBASE-6508 and 
> we backported it to HBase 0.94 in HBASE-9568.
> 4) Support running multiple hlog splitters on a single RS and on the master 
> (the latter can boost split efficiency for a tiny cluster).
> 5) Enable multiple splitters on a 'big' hlog file by logically splitting the 
> hlog into slices (of configurable size, e.g. the 64M HDFS chunk size), and 
> support concurrent split tasks on a single hlog file slice.
> 6) Do not cancel a timed-out split task until another task reports success 
> (this avoids the scenario where the split of an hlog file fails because no 
> single task can succeed within the timeout period); instead, reschedule an 
> identical split task to reduce split time (to avoid stragglers in hlog 
> split).
> 7) Consider hlog data locality when scheduling hlog split tasks: schedule 
> the split to a splitter that is near the hlog data.
> 8) Support multiple hlog writers, switching to another writer when write 
> latency to the current hlog is high due to a possible temporary network 
> spike.
> This is a draft listing the hlog improvements we plan to implement in the 
> near future. Comments and discussion are welcome.
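Proposal 3 above (skip already-flushed WAL edits during replay by comparing sequence ids) can be sketched as follows. This is an illustrative sketch only, not the actual HBASE-6508/HBASE-9568 implementation; the class and method names are invented for the example.

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Hypothetical sketch of seqId-based WAL edit filtering: an edit whose
 * sequence id is not newer than the biggest seqId already persisted in the
 * region's HFiles has been flushed and can be ignored during log split/replay.
 */
public class SeqIdFilterSketch {
    // region name -> biggest seqId found among that region's HFiles
    private final Map<String, Long> maxFlushedSeqId = new HashMap<>();

    /** Record the biggest seqId observed in a flushed HFile of a region. */
    public void recordFlushedSeqId(String region, long seqId) {
        maxFlushedSeqId.merge(region, seqId, Math::max);
    }

    /** Replay an edit only if it is newer than everything already flushed. */
    public boolean shouldReplay(String region, long editSeqId) {
        Long flushed = maxFlushedSeqId.get(region);
        // Unknown region: be conservative and replay the edit.
        return flushed == null || editSeqId > flushed;
    }

    public static void main(String[] args) {
        SeqIdFilterSketch filter = new SeqIdFilterSketch();
        filter.recordFlushedSeqId("r1", 100L);
        System.out.println(filter.shouldReplay("r1", 90L));   // already flushed
        System.out.println(filter.shouldReplay("r1", 101L));  // must replay
        System.out.println(filter.shouldReplay("r2", 5L));    // unknown region
    }
}
```

The conservative default for an unknown region matters: replaying a flushed edit is merely wasted work, whereas skipping an unflushed one loses data.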



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9873) Some improvements in hlog and hlog split

2013-11-06 Thread Liu Shaohui (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13814875#comment-13814875
 ] 

Liu Shaohui commented on HBASE-9873:


[~stack] [~jeffreyz] [~liochon]
{quote}
3) Rely on the smallest of all the biggest HFile seqIds of previously served 
regions to ignore some entries. Facebook implemented this in HBASE-6508 and we 
backported it to HBase 0.94 in HBASE-9568.
{quote}
What about this? I think HBASE-6508 is useful. 
Could anyone help review HBASE-9568 (the backport of HBASE-6508 to 0.94)? 
We may backport HBASE-6508 to trunk later.

> Some improvements in hlog and hlog split
> 
>
> Key: HBASE-9873
> URL: https://issues.apache.org/jira/browse/HBASE-9873
> Project: HBase
>  Issue Type: Improvement
>  Components: MTTR, wal
>Reporter: Liu Shaohui
>Priority: Critical
>  Labels: failover, hlog
>
> Some improvements in hlog and hlog split:
> 1) Try to clean old hlogs after each memstore flush to avoid unnecessary 
> hlog splits during failover. Currently, hlog cleaning is only run when 
> rolling the hlog writer.
> 2) Add a background hlog compaction thread to compact the hlogs: remove hlog 
> entries whose data have already been flushed to HFiles. The scenario is that 
> in a shared cluster, write requests to a table may be very light and 
> periodic, so many hlogs cannot be cleaned because of that table's entries in 
> them.
> 3) Rely on the smallest of all the biggest HFile seqIds of previously served 
> regions to ignore some entries. Facebook implemented this in HBASE-6508 and 
> we backported it to HBase 0.94 in HBASE-9568.
> 4) Support running multiple hlog splitters on a single RS and on the master 
> (the latter can boost split efficiency for a tiny cluster).
> 5) Enable multiple splitters on a 'big' hlog file by logically splitting the 
> hlog into slices (of configurable size, e.g. the 64M HDFS chunk size), and 
> support concurrent split tasks on a single hlog file slice.
> 6) Do not cancel a timed-out split task until another task reports success 
> (this avoids the scenario where the split of an hlog file fails because no 
> single task can succeed within the timeout period); instead, reschedule an 
> identical split task to reduce split time (to avoid stragglers in hlog 
> split).
> 7) Consider hlog data locality when scheduling hlog split tasks: schedule 
> the split to a splitter that is near the hlog data.
> 8) Support multiple hlog writers, switching to another writer when write 
> latency to the current hlog is high due to a possible temporary network 
> spike.
> This is a draft listing the hlog improvements we plan to implement in the 
> near future. Comments and discussion are welcome.





[jira] [Commented] (HBASE-9850) Issues with UI for table compact/split operation completion. After split/compaction operation using UI, the page is not automatically redirecting back using IE8/Firefox

2013-11-06 Thread Kashif J S (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13814899#comment-13814899
 ] 

Kashif J S commented on HBASE-9850:
---

Hi Hadoop QA,

>>-1 tests included. The patch doesn't appear to include any new or modified 
>>tests.
>>Please justify why no new tests are needed for this patch.
>>Also please list what manual steps were performed to verify this patch.
This was a UI issue. Manually pressing the compact/split/clone/restore buttons 
would take you to the "Request Action accepted" page. 
No JUnit test case is required, I guess. Do you write automated tests for the 
UI?

>>-1 javadoc. The javadoc tool appears to have generated 1 warning messages.
Since this patch involves modification of JSP pages (table.jsp and 
snapshot.jsp), I think this javadoc warning is not related to this patch. 
Please confirm.

>>-1 site. The patch appears to cause mvn site goal to fail.
I think this is not related to this patch. Please confirm.

>>-1 core tests. The patch failed these unit tests:
>>org.apache.hadoop.hbase.regionserver.wal.TestLogRolling
I think this is not related to this patch, since it only modifies JSP pages 
(table.jsp and snapshot.jsp). Please confirm.

> Issues with UI for table compact/split operation completion. After 
> split/compaction operation using UI, the page is not automatically 
> redirecting back using IE8/Firefox.
> -
>
> Key: HBASE-9850
> URL: https://issues.apache.org/jira/browse/HBASE-9850
> Project: HBase
>  Issue Type: Bug
>  Components: UI
>Affects Versions: 0.94.11
>Reporter: Kashif J S
> Fix For: 0.98.0, 0.96.1, 0.94.14
>
> Attachments: HBASE-9850.patch
>
>
> Steps:
> 1. Create a table with regions.
> 2. Insert some amount of data so that a few HFiles are created, fewer than 
> the minimum compaction file count (say 3 HFiles exist but the minimum for 
> compaction is 10).
> 3. From the UI, perform a compact operation on the table.
> The "TABLE ACTION REQUEST Accepted" page is displayed.
> 4. The operation fails because the compaction criteria are not met, but the 
> UI keeps resending compaction requests to the server. This happens in IE 
> (history.back() seems to resend the compact/split request). Firefox seems OK 
> in this case.
> 5. Afterwards, no automatic redirection to the main master page occurs.
> SOLUTION:
> table.jsp in the HBase master.
> The HTML meta tag contains a refresh with javascript:history.back().
> JavaScript cannot execute in a meta refresh tag like the above in table.jsp 
> and snapshot.jsp.
> 
> This will FAIL.
> W3Schools also suggests using a refresh in JavaScript rather than a meta tag.
> If the above meta is replaced as below, the behavior is OK, verified on 
> IE8/Firefox.
>   
>   
>   
> Hence table.jsp and snapshot.jsp should be modified as above.





[jira] [Commented] (HBASE-9885) Avoid some Result creation in protobuf conversions

2013-11-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13814912#comment-13814912
 ] 

Hadoop QA commented on HBASE-9885:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12612365/9885.v3.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 1 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 3 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.coprocessor.TestRegionServerCoprocessorExceptionWithAbort

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7752//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7752//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7752//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7752//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7752//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7752//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7752//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7752//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7752//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7752//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7752//console

This message is automatically generated.

> Avoid some Result creation in protobuf conversions
> --
>
> Key: HBASE-9885
> URL: https://issues.apache.org/jira/browse/HBASE-9885
> Project: HBase
>  Issue Type: Bug
>  Components: Client, Protobufs, regionserver
>Affects Versions: 0.98.0, 0.96.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9885.v1.patch, 9885.v2, 9885.v2.patch, 9885.v3.patch, 
> 9885.v3.patch
>
>
> We create a lot of Result objects that we could avoid, as they contain 
> nothing but a boolean value. We sometimes create a protobuf builder on this 
> path as well; this can also be avoided.
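The optimization described here can be illustrated with a generic sketch (the names are invented for the example; this is not the actual HBase ProtobufUtil code): when the only payload of a result is a boolean, two shared immutable instances are enough, so no object needs to be allocated per call.

```java
/**
 * Illustrative sketch: share two immutable boolean-only result instances
 * instead of allocating a fresh object (or a protobuf builder) per request.
 */
public class BooleanResultSketch {
    static final class BoolResult {
        final boolean exists;
        private BoolResult(boolean exists) { this.exists = exists; }

        static final BoolResult TRUE = new BoolResult(true);
        static final BoolResult FALSE = new BoolResult(false);

        /** Returns one of the two shared instances; never allocates. */
        static BoolResult of(boolean exists) { return exists ? TRUE : FALSE; }
    }

    public static void main(String[] args) {
        // Both calls return the same shared instance: no per-call allocation.
        System.out.println(BoolResult.of(true) == BoolResult.of(true));   // true
        System.out.println(BoolResult.of(false) == BoolResult.TRUE);      // false
    }
}
```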





[jira] [Created] (HBASE-9902) Region Server is starting normally even if clock skew is more than default 30 seconds(or any configured). -> Regionserver node time is greater than master node time

2013-11-06 Thread Kashif J S (JIRA)
Kashif J S created HBASE-9902:
-

 Summary: Region Server is starting normally even if clock skew is 
more than default 30 seconds(or any configured). -> Regionserver node time is 
greater than master node time
 Key: HBASE-9902
 URL: https://issues.apache.org/jira/browse/HBASE-9902
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 0.94.11
Reporter: Kashif J S


When the region server's time is ahead of the master's time and the difference 
is more than the hbase.master.maxclockskew value, region server startup does 
not fail with a ClockOutOfSyncException.
This causes some abnormal behavior, as detected by our tests.

ServerManager.java#checkClockSkew
  long skew = System.currentTimeMillis() - serverCurrentTime;
if (skew > maxSkew) {
  String message = "Server " + serverName + " has been " +
"rejected; Reported time is too far out of sync with master.  " +
"Time difference of " + skew + "ms > max allowed of " + maxSkew + 
"ms";
  LOG.warn(message);
  throw new ClockOutOfSyncException(message);
}

The line above yields a negative value when the master's time is less than the 
region server's time, so the "if (skew > maxSkew)" check fails to detect the 
skew in this case.


Please note: this was tested on HBase 0.94.11, and trunk currently has the 
same logic.

The fix would be to make the skew value positive first, as below:

 long skew = System.currentTimeMillis() - serverCurrentTime;
skew = (skew < 0 ? -skew : skew);
if (skew > maxSkew) { ...
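A minimal, self-contained sketch of the proposed fix is below. It takes the absolute value of the time difference so that a region server running ahead of the master is rejected too; the class and method names are illustrative, not the actual ServerManager API.

```java
/**
 * Sketch of the corrected clock-skew check: with Math.abs, skew is detected
 * regardless of which side's clock is ahead. The original check missed the
 * case where the region server is ahead, because the difference was negative.
 */
public class ClockSkewCheckSketch {
    static class ClockOutOfSyncException extends Exception {
        ClockOutOfSyncException(String message) { super(message); }
    }

    static void checkClockSkew(long masterTime, long serverCurrentTime, long maxSkew)
            throws ClockOutOfSyncException {
        long skew = Math.abs(masterTime - serverCurrentTime);
        if (skew > maxSkew) {
            throw new ClockOutOfSyncException(
                "Time difference of " + skew + "ms > max allowed of " + maxSkew + "ms");
        }
    }

    public static void main(String[] args) throws Exception {
        // Within tolerance: no exception.
        checkClockSkew(100_000L, 100_010L, 30_000L);
        try {
            // Server 40s AHEAD of master: raw difference is -40000, so the
            // old "skew > maxSkew" check would have passed it silently.
            checkClockSkew(100_000L, 140_000L, 30_000L);
            System.out.println("not rejected");
        } catch (ClockOutOfSyncException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

Note that Math.abs and the ternary form in the description are equivalent here; both make the comparison symmetric.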





[jira] [Commented] (HBASE-9902) Region Server is starting normally even if clock skew is more than default 30 seconds(or any configured). -> Regionserver node time is greater than master node time

2013-11-06 Thread Jyothi Mandava (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13814936#comment-13814936
 ] 

Jyothi Mandava commented on HBASE-9902:
---

We can use a change like the one below:

 long skew = Math.abs(System.currentTimeMillis() - serverCurrentTime);
 if (skew > maxSkew) 

> Region Server is starting normally even if clock skew is more than default 30 
> seconds(or any configured). -> Regionserver node time is greater than master 
> node time
> 
>
> Key: HBASE-9902
> URL: https://issues.apache.org/jira/browse/HBASE-9902
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Affects Versions: 0.94.11
>Reporter: Kashif J S
>
> When the region server's time is ahead of the master's time and the 
> difference is more than the hbase.master.maxclockskew value, region server 
> startup does not fail with a ClockOutOfSyncException.
> This causes some abnormal behavior, as detected by our tests.
> ServerManager.java#checkClockSkew
>   long skew = System.currentTimeMillis() - serverCurrentTime;
> if (skew > maxSkew) {
>   String message = "Server " + serverName + " has been " +
> "rejected; Reported time is too far out of sync with master.  " +
> "Time difference of " + skew + "ms > max allowed of " + maxSkew + 
> "ms";
>   LOG.warn(message);
>   throw new ClockOutOfSyncException(message);
> }
> The line above yields a negative value when the master's time is less than 
> the region server's time, so the "if (skew > maxSkew)" check fails to detect 
> the skew in this case.
> Please note: this was tested on HBase 0.94.11, and trunk currently has the 
> same logic.
> The fix would be to make the skew value positive first, as below:
>  long skew = System.currentTimeMillis() - serverCurrentTime;
> skew = (skew < 0 ? -skew : skew);
> if (skew > maxSkew) { ...





[jira] [Commented] (HBASE-9818) NPE in HFileBlock#AbstractFSReader#readAtOffset

2013-11-06 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13814940#comment-13814940
 ] 

Ted Yu commented on HBASE-9818:
---

200 iterations of TestHRegion and TestAtomicOperation passed.


> NPE in HFileBlock#AbstractFSReader#readAtOffset
> ---
>
> Key: HBASE-9818
> URL: https://issues.apache.org/jira/browse/HBASE-9818
> Project: HBase
>  Issue Type: Bug
>Reporter: Jimmy Xiang
>Assignee: Ted Yu
> Attachments: 9818-v2.txt, 9818-v3.txt, 9818-v4.txt
>
>
> HFileBlock#istream seems to be null. I was wondering whether we should hide 
> FSDataInputStreamWrapper#useHBaseChecksum.
> By the way, this happened when online schema change was enabled (encoding).
> {noformat}
> 2013-10-22 10:58:43,321 ERROR [RpcServer.handler=28,port=36020] 
> regionserver.HRegionServer:
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1200)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockDataInternal(HFileBlock.java:1436)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:359)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:254)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:503)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.reseekTo(HFileReaderV2.java:553)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:245)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:166)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.enforceSeek(StoreFileScanner.java:361)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.pollRealKV(KeyValueHeap.java:336)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:293)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.requestSeek(KeyValueHeap.java:258)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:603)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:476)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:129)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:3546)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:3616)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:3494)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:3485)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3079)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:27022)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:1979)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:90)
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.consumerLoop(SimpleRpcScheduler.java:160)
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.access$000(SimpleRpcScheduler.java:38)
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler$1.run(SimpleRpcScheduler.java:110)
> at java.lang.Thread.run(Thread.java:724)
> 2013-10-22 10:58:43,665 ERROR [RpcServer.handler=23,port=36020] 
> regionserver.HRegionServer:
> org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: Expected 
> nextCallSeq: 53438 But the nextCallSeq got from client: 53437; 
> request=scanner_id: 1252577470624375060 number_of_rows: 100 close_scanner: 
> false next_call_seq: 53437
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3030)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:27022)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:1979)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:90)
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.consumerLoop(SimpleRpcScheduler.java:160)
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.access$000(SimpleRpcScheduler.java:38)
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler$1.run(SimpleRpcScheduler.java:110)
> at java.lang.Thread.run(Thread.java:72

[jira] [Created] (HBASE-9903) Remove the jamon generated classes from the findbugs analysis

2013-11-06 Thread Nicolas Liochon (JIRA)
Nicolas Liochon created HBASE-9903:
--

 Summary: Remove the jamon generated classes from the findbugs 
analysis
 Key: HBASE-9903
 URL: https://issues.apache.org/jira/browse/HBASE-9903
 Project: HBase
  Issue Type: Bug
  Components: build
Affects Versions: 0.96.0, 0.98.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.98.0
 Attachments: 9903.v1.patch

The current filter does not work.





[jira] [Updated] (HBASE-9903) Remove the jamon generated classes from the findbugs analysis

2013-11-06 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-9903:
---

Status: Patch Available  (was: Open)

> Remove the jamon generated classes from the findbugs analysis
> -
>
> Key: HBASE-9903
> URL: https://issues.apache.org/jira/browse/HBASE-9903
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.96.0, 0.98.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 0.98.0
>
> Attachments: 9903.v1.patch
>
>
> The current filter does not work.





[jira] [Updated] (HBASE-9903) Remove the jamon generated classes from the findbugs analysis

2013-11-06 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-9903:
---

Attachment: 9903.v1.patch

> Remove the jamon generated classes from the findbugs analysis
> -
>
> Key: HBASE-9903
> URL: https://issues.apache.org/jira/browse/HBASE-9903
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.98.0, 0.96.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 0.98.0
>
> Attachments: 9903.v1.patch
>
>
> The current filter does not work.





[jira] [Commented] (HBASE-9902) Region Server is starting normally even if clock skew is more than default 30 seconds(or any configured). -> Regionserver node time is greater than master node time

2013-11-06 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13814999#comment-13814999
 ] 

Nicolas Liochon commented on HBASE-9902:


bq. We can use a change like the one below.
That would do it :-)
Do you want to submit a patch, [~jyothi.mandava]?

> Region Server is starting normally even if clock skew is more than default 30 
> seconds(or any configured). -> Regionserver node time is greater than master 
> node time
> 
>
> Key: HBASE-9902
> URL: https://issues.apache.org/jira/browse/HBASE-9902
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Affects Versions: 0.94.11
>Reporter: Kashif J S
>
> When the region server's time is ahead of the master's time and the 
> difference is more than the hbase.master.maxclockskew value, region server 
> startup does not fail with a ClockOutOfSyncException.
> This causes some abnormal behavior, as detected by our tests.
> ServerManager.java#checkClockSkew
>   long skew = System.currentTimeMillis() - serverCurrentTime;
> if (skew > maxSkew) {
>   String message = "Server " + serverName + " has been " +
> "rejected; Reported time is too far out of sync with master.  " +
> "Time difference of " + skew + "ms > max allowed of " + maxSkew + 
> "ms";
>   LOG.warn(message);
>   throw new ClockOutOfSyncException(message);
> }
> The line above yields a negative value when the master's time is less than 
> the region server's time, so the "if (skew > maxSkew)" check fails to detect 
> the skew in this case.
> Please note: this was tested on HBase 0.94.11, and trunk currently has the 
> same logic.
> The fix would be to make the skew value positive first, as below:
>  long skew = System.currentTimeMillis() - serverCurrentTime;
> skew = (skew < 0 ? -skew : skew);
> if (skew > maxSkew) { ...





[jira] [Updated] (HBASE-9792) Region states should update last assignments when a region is opened.

2013-11-06 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-9792:
---

Attachment: trunk-9792_v3.1.patch

Attached patch v3.1, added some comments.

> Region states should update last assignments when a region is opened.
> -
>
> Key: HBASE-9792
> URL: https://issues.apache.org/jira/browse/HBASE-9792
> Project: HBase
>  Issue Type: Bug
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Attachments: trunk-9792.patch, trunk-9792_v2.patch, 
> trunk-9792_v3.1.patch, trunk-9792_v3.patch
>
>
> Currently, we update a region's last assignment region server when the 
> region comes online. We should do this sooner, when the region is moved to 
> the OPEN state. CM (ChaosMonkey) could kill this region server before we 
> delete the znode and online the region.





[jira] [Commented] (HBASE-9903) Remove the jamon generated classes from the findbugs analysis

2013-11-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13815018#comment-13815018
 ] 

Hadoop QA commented on HBASE-9903:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12612380/9903.v1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 1 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at 
org.apache.hadoop.hbase.TestZooKeeper.testRegionAssignmentAfterMasterRecoveryDueToZKExpiry(TestZooKeeper.java:488)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7753//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7753//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7753//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7753//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7753//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7753//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7753//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7753//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7753//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7753//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7753//console


> Remove the jamon generated classes from the findbugs analysis
> -
>
> Key: HBASE-9903
> URL: https://issues.apache.org/jira/browse/HBASE-9903
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.98.0, 0.96.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 0.98.0
>
> Attachments: 9903.v1.patch
>
>
> The current filter does not work.





[jira] [Commented] (HBASE-9903) Remove the jamon generated classes from the findbugs analysis

2013-11-06 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13815021#comment-13815021
 ] 

Nicolas Liochon commented on HBASE-9903:


It worked. There is an org.apache.hadoop.hbase.generated package that I should 
be able to remove as well.



> Remove the jamon generated classes from the findbugs analysis
> -
>
> Key: HBASE-9903
> URL: https://issues.apache.org/jira/browse/HBASE-9903
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.98.0, 0.96.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 0.98.0
>
> Attachments: 9903.v1.patch
>
>
> The current filter does not work.





[jira] [Commented] (HBASE-9885) Avoid some Result creation in protobuf conversions

2013-11-06 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13815026#comment-13815026
 ] 

Nicolas Liochon commented on HBASE-9885:


All this could be attributed to the usual flakiness. I think I'm going to 
commit the last version. Does anyone disagree?

> Avoid some Result creation in protobuf conversions
> --
>
> Key: HBASE-9885
> URL: https://issues.apache.org/jira/browse/HBASE-9885
> Project: HBase
>  Issue Type: Bug
>  Components: Client, Protobufs, regionserver
>Affects Versions: 0.98.0, 0.96.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9885.v1.patch, 9885.v2, 9885.v2.patch, 9885.v3.patch, 
> 9885.v3.patch
>
>
> We create a lot of Result objects that we could avoid, as they contain 
> nothing other than a boolean value. We sometimes create a protobuf builder 
> on this path as well; this can be avoided.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9903) Remove the jamon generated classes from the findbugs analysis

2013-11-06 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-9903:
---

Status: Patch Available  (was: Open)

> Remove the jamon generated classes from the findbugs analysis
> -
>
> Key: HBASE-9903
> URL: https://issues.apache.org/jira/browse/HBASE-9903
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.96.0, 0.98.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 0.98.0
>
> Attachments: 9903.v1.patch, 9903.v2.patch
>
>
> The current filter does not work.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9903) Remove the jamon generated classes from the findbugs analysis

2013-11-06 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-9903:
---

Status: Open  (was: Patch Available)

> Remove the jamon generated classes from the findbugs analysis
> -
>
> Key: HBASE-9903
> URL: https://issues.apache.org/jira/browse/HBASE-9903
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.96.0, 0.98.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 0.98.0
>
> Attachments: 9903.v1.patch, 9903.v2.patch
>
>
> The current filter does not work.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9903) Remove the jamon generated classes from the findbugs analysis

2013-11-06 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-9903:
---

Attachment: 9903.v2.patch

> Remove the jamon generated classes from the findbugs analysis
> -
>
> Key: HBASE-9903
> URL: https://issues.apache.org/jira/browse/HBASE-9903
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.98.0, 0.96.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 0.98.0
>
> Attachments: 9903.v1.patch, 9903.v2.patch
>
>
> The current filter does not work.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9863) Intermittently TestZooKeeper#testRegionAssignmentAfterMasterRecoveryDueToZKExpiry hangs in admin#createTable() call

2013-11-06 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-9863:
--

Summary: Intermittently 
TestZooKeeper#testRegionAssignmentAfterMasterRecoveryDueToZKExpiry hangs in 
admin#createTable() call  (was: Intermittently 
TestZooKeeper#testRegionAssignmentAfterMasterRecoveryDueToZKExpiry hangs)

> Intermittently 
> TestZooKeeper#testRegionAssignmentAfterMasterRecoveryDueToZKExpiry hangs in 
> admin#createTable() call
> ---
>
> Key: HBASE-9863
> URL: https://issues.apache.org/jira/browse/HBASE-9863
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 0.98.0
>
> Attachments: 9863-v1.txt, 9863-v2.txt, 9863-v3.txt, 9863-v4.txt, 
> 9863-v5.txt, 9863-v6.txt
>
>
> TestZooKeeper#testRegionAssignmentAfterMasterRecoveryDueToZKExpiry sometimes 
> hung.
> Here were two recent occurrences:
> https://builds.apache.org/job/PreCommit-HBASE-Build/7676/console
> https://builds.apache.org/job/PreCommit-HBASE-Build/7671/console
> There were 9 occurrences of the following in both stack traces:
> {code}
> "FifoRpcScheduler.handler1-thread-5" daemon prio=10 tid=0x09df8800 nid=0xc17 
> waiting for monitor entry [0x6fdf8000]
>java.lang.Thread.State: BLOCKED (on object monitor)
>   at 
> org.apache.hadoop.hbase.master.TableNamespaceManager.isTableAvailableAndInitialized(TableNamespaceManager.java:250)
>   - waiting to lock <0x7f69b5f0> (a 
> org.apache.hadoop.hbase.master.TableNamespaceManager)
>   at 
> org.apache.hadoop.hbase.master.HMaster.isTableNamespaceManagerReady(HMaster.java:3146)
>   at 
> org.apache.hadoop.hbase.master.HMaster.getNamespaceDescriptor(HMaster.java:3105)
>   at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1743)
>   at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1782)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:38221)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:1983)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:92)
> {code}
> The test hung here:
> {code}
> "pool-1-thread-1" prio=10 tid=0x74f7b800 nid=0x5aa5 in Object.wait() 
> [0x74efe000]
>java.lang.Thread.State: TIMED_WAITING (on object monitor)
>   at java.lang.Object.wait(Native Method)
>   - waiting on <0xcc848348> (a org.apache.hadoop.hbase.ipc.RpcClient$Call)
>   at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1436)
>   - locked <0xcc848348> (a org.apache.hadoop.hbase.ipc.RpcClient$Call)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1654)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1712)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.createTable(MasterProtos.java:40372)
>   at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$5.createTable(HConnectionManager.java:1931)
>   at org.apache.hadoop.hbase.client.HBaseAdmin$2.call(HBaseAdmin.java:598)
>   at org.apache.hadoop.hbase.client.HBaseAdmin$2.call(HBaseAdmin.java:594)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:116)
>   - locked <0x7faa26d0> (a org.apache.hadoop.hbase.client.RpcRetryingCaller)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:94)
>   - locked <0x7faa26d0> (a org.apache.hadoop.hbase.client.RpcRetryingCaller)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3124)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.createTableAsync(HBaseAdmin.java:594)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:485)
>   at 
> org.apache.hadoop.hbase.TestZooKeeper.testRegionAssignmentAfterMasterRecoveryDueToZKExpiry(TestZooKeeper.java:486)
> {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9873) Some improvements in hlog and hlog split

2013-11-06 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13815039#comment-13815039
 ] 

Nicolas Liochon commented on HBASE-9873:


bq.  Actually, we want to intro a speculative scheduler for hlog tasks, as the 
speculative scheduler for map/reduce tasks in mapreduce.
Note that there is a new algorithm implemented in HBASE-7006 that allows 
writes during the recovery. This algorithm is not really suitable for 
speculative execution, because the writes are always executed on the same 
machines, so adding executions would likely slow down the process. That said, 
it's not for 0.94.

bq. Rely on the smallest of all biggest hfile's seqId of previous served 
regions to ignore some entries. Facebook have implemented this in HBASE-6508 
and we backport it to hbase 0.94 in HBASE-9568.
Yep, this would be useful for sure (my understanding is that 0.96+ has it)

> Some improvements in hlog and hlog split
> 
>
> Key: HBASE-9873
> URL: https://issues.apache.org/jira/browse/HBASE-9873
> Project: HBase
>  Issue Type: Improvement
>  Components: MTTR, wal
>Reporter: Liu Shaohui
>Priority: Critical
>  Labels: failover, hlog
>
> Some improvements in hlog and hlog split
> 1) Try to clean old hlogs after each memstore flush to avoid unnecessary hlog 
> splits in failover. Currently, hlog cleaning is only run when rolling the 
> hlog writer. 
> 2) Add a background hlog compaction thread to compact hlogs: remove the 
> hlog entries whose data have already been flushed to hfiles. The motivating 
> scenario is a shared cluster where a table receives very few, periodic 
> writes, so many hlogs cannot be cleaned because they still contain entries 
> for that table.
> 3) Rely on the smallest of the largest hfile seqIds of the previously served 
> regions to ignore some entries. Facebook has implemented this in HBASE-6508, 
> and we backported it to HBase 0.94 in HBASE-9568.
> 4) Support running multiple hlog splitters on a single RS and on the master 
> (the latter can boost split efficiency for a tiny cluster).
> 5) Enable multiple splitters on a 'big' hlog file by logically splitting the 
> hlog into slices (of configurable size, e.g. the HDFS block size of 64M), 
> supporting multiple concurrent split tasks on a single hlog file slice.
> 6) Do not cancel a timed-out split task until another task reports success 
> (this avoids the scenario where the split of an hlog file fails because no 
> single task can succeed within the timeout period), and reschedule an 
> identical split task to reduce split time (to avoid stragglers in hlog 
> splitting).
> 7) Consider hlog data locality when scheduling hlog split tasks: schedule 
> the hlog to a splitter that is near the hlog data.
> 8) Support multiple hlog writers, switching to another hlog writer when 
> write latency to the current hlog becomes high due to a possible temporary 
> network spike?
> This is a draft listing the hlog improvements we plan to implement in the 
> near future. Comments and discussion are welcome.
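
Item 3 in the list above (skipping WAL entries already covered by flushed 
hfiles) can be sketched as a simple predicate. The names below are 
hypothetical, not HBase's actual implementation: an edit still needs replay 
only if its sequence id is newer than the smallest of the per-store "largest 
flushed seqId" values.

```java
import java.util.Arrays;
import java.util.Collection;
import java.util.Collections;

// Hypothetical sketch: an edit is durable in hfiles (and can be skipped
// during log split/replay) once every store has flushed past it, i.e. when
// its seqId is at or below the minimum of the per-store largest flushed
// seqIds.
public final class WalReplayFilter {

    // Smallest of the per-store "largest flushed seqId" values.
    static long safeFlushedSeqId(Collection<Long> largestFlushedSeqIdPerStore) {
        return Collections.min(largestFlushedSeqIdPerStore);
    }

    // Only edits newer than the safe threshold still need replay.
    static boolean needsReplay(long entrySeqId,
                               Collection<Long> largestFlushedSeqIdPerStore) {
        return entrySeqId > safeFlushedSeqId(largestFlushedSeqIdPerStore);
    }

    public static void main(String[] args) {
        Collection<Long> flushed = Arrays.asList(10L, 25L, 17L);
        System.out.println(needsReplay(9L, flushed));  // flushed in every store -> false
        System.out.println(needsReplay(12L, flushed)); // newer than min (10) -> true
    }
}
```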



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9903) Remove the jamon generated classes from the findbugs analysis

2013-11-06 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13815045#comment-13815045
 ] 

Nick Dimiduk commented on HBASE-9903:
-

The findbugs UI doesn't accept these regex patterns. Does this work via the 
maven plugin?

> Remove the jamon generated classes from the findbugs analysis
> -
>
> Key: HBASE-9903
> URL: https://issues.apache.org/jira/browse/HBASE-9903
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.98.0, 0.96.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 0.98.0
>
> Attachments: 9903.v1.patch, 9903.v2.patch
>
>
> The current filter does not work.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9903) Remove the jamon generated classes from the findbugs analysis

2013-11-06 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13815048#comment-13815048
 ] 

Nicolas Liochon commented on HBASE-9903:


I used the official findbugs doc 
(http://findbugs.sourceforge.net/manual/filter.html), not the one related to 
the plugin, and did the configuration by editing the xml file directly.
It seems to have worked :-)
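
For reference, an exclude filter along the lines of the official manual's 
syntax might look like the following. The package patterns are assumptions 
about where the generated classes live, not the actual content of the patch:

```xml
<!-- Hypothetical FindBugs exclude filter (syntax per the FindBugs manual).
     The class-name regexes are assumptions, not the patch's real patterns. -->
<FindBugsFilter>
  <Match>
    <!-- jamon-generated template classes -->
    <Class name="~.*\.tmpl\..*"/>
  </Match>
  <Match>
    <!-- other generated sources -->
    <Class name="~org\.apache\.hadoop\.hbase\.generated\..*"/>
  </Match>
</FindBugsFilter>
```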


> Remove the jamon generated classes from the findbugs analysis
> -
>
> Key: HBASE-9903
> URL: https://issues.apache.org/jira/browse/HBASE-9903
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.98.0, 0.96.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 0.98.0
>
> Attachments: 9903.v1.patch, 9903.v2.patch
>
>
> The current filter does not work.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9903) Remove the jamon generated classes from the findbugs analysis

2013-11-06 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13815051#comment-13815051
 ] 

Nick Dimiduk commented on HBASE-9903:
-

Great!

Excluding generated sources makes sense to me. Presumably findbugs doesn't 
know how to parse the jamon templates...

+1

> Remove the jamon generated classes from the findbugs analysis
> -
>
> Key: HBASE-9903
> URL: https://issues.apache.org/jira/browse/HBASE-9903
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.98.0, 0.96.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 0.98.0
>
> Attachments: 9903.v1.patch, 9903.v2.patch
>
>
> The current filter does not work.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9885) Avoid some Result creation in protobuf conversions

2013-11-06 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13815058#comment-13815058
 ] 

stack commented on HBASE-9885:
--

+1 on commit to 0.96 and trunk.

> Avoid some Result creation in protobuf conversions
> --
>
> Key: HBASE-9885
> URL: https://issues.apache.org/jira/browse/HBASE-9885
> Project: HBase
>  Issue Type: Bug
>  Components: Client, Protobufs, regionserver
>Affects Versions: 0.98.0, 0.96.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9885.v1.patch, 9885.v2, 9885.v2.patch, 9885.v3.patch, 
> 9885.v3.patch
>
>
> We create a lot of Result objects that we could avoid, as they contain 
> nothing other than a boolean value. We sometimes create a protobuf builder 
> on this path as well; this can be avoided.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9901) Add a toString in HTable, fix a log in AssignmentManager

2013-11-06 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13815061#comment-13815061
 ] 

stack commented on HBASE-9901:
--

The toString is ugly.  Fix on commit:

+return "HTable{" + "connection=" + connection + ", tableName=" + tableName 
+ '}';

Make it just:

connection + "," + tableName

It will look like:

hconnection-0x020234343,bigtable

Or 

bigtable,hconnection-0x020234343

Our logs are too profuse already -- they need paring.  Let the above be the 
convention for the tablename string.  There's no need for the '{' and the 
HTable preamble.

If you do above, +1 on patch for branch and trunk
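
The suggested convention could be sketched like this (the method and argument 
names are illustrative, not HTable's actual fields):

```java
// Sketch of the log-friendly toString convention suggested above:
// "<tablename>,<connection>" with no class-name preamble or braces.
public final class TableToString {

    static String toLogString(String tableName, String connection) {
        return tableName + "," + connection;
    }

    public static void main(String[] args) {
        // Prints: bigtable,hconnection-0x020234343
        System.out.println(toLogString("bigtable", "hconnection-0x020234343"));
    }
}
```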





> Add a toString in HTable, fix a log in AssignmentManager
> 
>
> Key: HBASE-9901
> URL: https://issues.apache.org/jira/browse/HBASE-9901
> Project: HBase
>  Issue Type: Bug
>  Components: Client, regionserver
>Affects Versions: 0.98.0, 0.96.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
>Priority: Trivial
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9901.v1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9818) NPE in HFileBlock#AbstractFSReader#readAtOffset

2013-11-06 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-9818:
--

Attachment: 9818-v5.txt

Patch v5 fixes javadoc warning

> NPE in HFileBlock#AbstractFSReader#readAtOffset
> ---
>
> Key: HBASE-9818
> URL: https://issues.apache.org/jira/browse/HBASE-9818
> Project: HBase
>  Issue Type: Bug
>Reporter: Jimmy Xiang
>Assignee: Ted Yu
> Attachments: 9818-v2.txt, 9818-v3.txt, 9818-v4.txt, 9818-v5.txt
>
>
> HFileBlock#istream seems to be null.  I was wondering whether we should hide 
> FSDataInputStreamWrapper#useHBaseChecksum.
> By the way, this happened when online schema change was enabled (encoding).
> {noformat}
> 2013-10-22 10:58:43,321 ERROR [RpcServer.handler=28,port=36020] 
> regionserver.HRegionServer:
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1200)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockDataInternal(HFileBlock.java:1436)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:359)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:254)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:503)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.reseekTo(HFileReaderV2.java:553)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:245)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:166)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.enforceSeek(StoreFileScanner.java:361)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.pollRealKV(KeyValueHeap.java:336)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:293)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.requestSeek(KeyValueHeap.java:258)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:603)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:476)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:129)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:3546)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:3616)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:3494)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:3485)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3079)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:27022)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:1979)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:90)
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.consumerLoop(SimpleRpcScheduler.java:160)
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.access$000(SimpleRpcScheduler.java:38)
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler$1.run(SimpleRpcScheduler.java:110)
> at java.lang.Thread.run(Thread.java:724)
> 2013-10-22 10:58:43,665 ERROR [RpcServer.handler=23,port=36020] 
> regionserver.HRegionServer:
> org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: Expected 
> nextCallSeq: 53438 But the nextCallSeq got from client: 53437; 
> request=scanner_id: 1252577470624375060 number_of_rows: 100 close_scanner: 
> false next_call_seq: 53437
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3030)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:27022)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:1979)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:90)
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.consumerLoop(SimpleRpcScheduler.java:160)
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.access$000(SimpleRpcScheduler.java:38)
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler$1.run(SimpleRpcScheduler.java:110)
> at java.lang.Thread.run(Thread.java:724)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.1#6144)

[jira] [Commented] (HBASE-8323) Low hanging checksum improvements

2013-11-06 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13815068#comment-13815068
 ] 

Todd Lipcon commented on HBASE-8323:


You probably want to use it via DataChecksum, which is already a public class. 
It has the right logic to fall back to the Java implementation if the native 
one isn't available.
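
The fallback pattern described here can be illustrated with plain JDK classes. 
This is only a sketch of the idea, not Hadoop's actual DataChecksum 
internals; the native-availability probe is a stand-in:

```java
import java.util.zip.CRC32;
import java.util.zip.Checksum;

// Sketch of "prefer native, fall back to Java" checksum selection
// (not Hadoop's real DataChecksum code).
public final class ChecksumFactory {

    // Stand-in for a real native-library availability probe.
    static boolean nativeCrcAvailable() {
        return false; // assume no native library in this sketch
    }

    static Checksum newCrc32() {
        if (nativeCrcAvailable()) {
            throw new UnsupportedOperationException("native path not sketched");
        }
        return new CRC32(); // pure-Java fallback always works
    }

    static long checksumOf(byte[] data) {
        Checksum c = newCrc32();
        c.update(data, 0, data.length);
        return c.getValue();
    }

    public static void main(String[] args) {
        System.out.println(checksumOf("hbase".getBytes()));
    }
}
```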

> Low hanging checksum improvements
> -
>
> Key: HBASE-8323
> URL: https://issues.apache.org/jira/browse/HBASE-8323
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance
>Reporter: Enis Soztutar
>
> Over at Hadoop land, [~tlipcon] had done some improvements for checksums, a 
> native implementation for CRC32C (HADOOP-7445) and bulk verify of checksums 
> (HADOOP-7444). 
> In HBase, we can do
>  - Also develop a bulk verify API. Regardless of 
> hbase.hstore.bytes.per.checksum we always want to verify of the whole 
> checksum for the hfile block.
>  - Enable NativeCrc32 to be used as a checksum algo. It is not clear how much 
> gain we can expect over pure java CRC32. 
> Though, longer term we should focus on convincing hdfs guys for inline 
> checksums (HDFS-2699)



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9792) Region states should update last assignments when a region is opened.

2013-11-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13815072#comment-13815072
 ] 

Hadoop QA commented on HBASE-9792:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12612394/trunk-9792_v3.1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified tests.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 1 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 4 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7754//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7754//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7754//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7754//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7754//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7754//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7754//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7754//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7754//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7754//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7754//console

This message is automatically generated.

> Region states should update last assignments when a region is opened.
> -
>
> Key: HBASE-9792
> URL: https://issues.apache.org/jira/browse/HBASE-9792
> Project: HBase
>  Issue Type: Bug
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Attachments: trunk-9792.patch, trunk-9792_v2.patch, 
> trunk-9792_v3.1.patch, trunk-9792_v3.patch
>
>
> Currently, we update a region's last assignment region server when the region 
> is online.  We should do this sooner, when the region is moved to OPEN state. 
>  CM could kill this region server before we delete the znode and online the 
> region.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-7663) [Per-KV security] Visibility labels

2013-11-06 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13815075#comment-13815075
 ] 

stack commented on HBASE-7663:
--

Skimming the patch...

Should setAuthorizations and getAuthorizations be pushed up to the super 
class, or do they only apply to certain 'types' -- like setCellVisibility and 
getCellVisibility?  It seems they are for Get and Scan only.  Should we have 
an interface other than Mutation that Scan and Get implement (and Increment 
too, I suppose, since it returns a value)?  We'd add these methods there.

An illegal operation is different to an AccessDeniedE?  It is not necessarily 
of the security realm?

Does CellVisibility need a class comment?  Or maybe it is ok given it is in the 
visibility package and it is called CellVisibility (no need to be pedantic)

Ok... let me go look at your responses up on RB now.. I realize I did not 
go back to them.



> [Per-KV security] Visibility labels
> ---
>
> Key: HBASE-7663
> URL: https://issues.apache.org/jira/browse/HBASE-7663
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors, security
>Affects Versions: 0.98.0
>Reporter: Andrew Purtell
>Assignee: Anoop Sam John
> Fix For: 0.98.0
>
> Attachments: HBASE-7663.patch, HBASE-7663_V2.patch, 
> HBASE-7663_V3.patch, HBASE-7663_V4.patch, HBASE-7663_V5.patch, 
> HBASE-7663_V6.patch
>
>
> Implement Accumulo-style visibility labels. Consider the following design 
> principles:
> - Coprocessor based implementation
> - Minimal to no changes to core code
> - Use KeyValue tags (HBASE-7448) to carry labels
> - Use OperationWithAttributes# {get,set}Attribute for handling visibility 
> labels in the API
> - Implement a new filter for evaluating visibility labels as KVs are streamed 
> through.
> This approach would be consistent in deployment and API details with other 
> per-KV security work, supporting environments where they might be both be 
> employed, even stacked on some tables.
> See the parent issue for more discussion.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9901) Add a toString in HTable, fix a log in AssignmentManager

2013-11-06 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13815078#comment-13815078
 ] 

Nicolas Liochon commented on HBASE-9901:


bq.  No need of having the '{' and the HTable preamble
In this case, it was generated by IntelliJ, so it's presumably accepted as 
common somewhere :-).

But no problem doing something different, of course. 

I'm going to commit the bigtable,hconnection-0x020234343 one.

> Add a toString in HTable, fix a log in AssignmentManager
> 
>
> Key: HBASE-9901
> URL: https://issues.apache.org/jira/browse/HBASE-9901
> Project: HBase
>  Issue Type: Bug
>  Components: Client, regionserver
>Affects Versions: 0.98.0, 0.96.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
>Priority: Trivial
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9901.v1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9901) Add a toString in HTable, fix a log in AssignmentManager

2013-11-06 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-9901:
---

Attachment: 9901.v2.patch

> Add a toString in HTable, fix a log in AssignmentManager
> 
>
> Key: HBASE-9901
> URL: https://issues.apache.org/jira/browse/HBASE-9901
> Project: HBase
>  Issue Type: Bug
>  Components: Client, regionserver
>Affects Versions: 0.98.0, 0.96.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
>Priority: Trivial
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9901.v1.patch, 9901.v2.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9888) HBase replicates edits written before the replication peer is created

2013-11-06 Thread Jean-Daniel Cryans (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13815084#comment-13815084
 ] 

Jean-Daniel Cryans commented on HBASE-9888:
---

In 0.94, HLogKey has a {{writeTime}} and we could seek in the current WAL until 
we find an edit that's been written after the source was created. It's still 
fuzzy since the time that each source actually gets created will differ for 
each RS, but at least you wouldn't start replicating old edits.
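
That writeTime-based catch-up could be sketched as a predicate like the 
following; the names are hypothetical, not 0.94's actual HLogKey API:

```java
// Hypothetical sketch: while catching up on the currently open WAL, skip
// entries whose write time predates the replication source's creation time.
public final class ReplicationStartFilter {

    static boolean shouldShip(long entryWriteTime, long sourceCreationTime) {
        return entryWriteTime >= sourceCreationTime;
    }

    public static void main(String[] args) {
        long sourceCreated = 1_000L;
        System.out.println(shouldShip(900L, sourceCreated));   // old edit -> false
        System.out.println(shouldShip(1_500L, sourceCreated)); // new edit -> true
    }
}
```

As the comment notes, this stays fuzzy because each RS creates its source at 
a slightly different time, but it avoids shipping clearly pre-existing edits.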

> HBase replicates edits written before the replication peer is created
> -
>
> Key: HBASE-9888
> URL: https://issues.apache.org/jira/browse/HBASE-9888
> Project: HBase
>  Issue Type: Bug
>Reporter: Dave Latham
>
> When creating a new replication peer the ReplicationSourceManager enqueues 
> the currently open HLog to the ReplicationSource to ship to the destination 
> cluster.  The ReplicationSource starts at the beginning of the HLog and ships 
> over any pre-existing writes.
> A workaround is to roll all the HLogs before enabling replication.
> A little background for how it affected us - we were migrating one cluster in 
> a master-master pair.  I.e. transitioning from A <\-> B to B <-> C.  After 
> shutting down writes from A -> B we enabled writes from C -> B.  However, 
> this replicated some earlier writes that were in C's HLogs that had 
> originated in A.  Since we were running a version of HBase before HBASE-7709 
> those writes then got caught in a infinite replication cycle and bringing 
> down region servers OOM because of HBASE-9865.
> However, in general, if one wants to manage what data gets replicated, one 
> wouldn't expect that potentially very old writes would be included when 
> setting up a new replication link.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9865) WALEdit.heapSize() is incorrect in certain replication scenarios which may cause RegionServers to go OOM

2013-11-06 Thread Jean-Daniel Cryans (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13815094#comment-13815094
 ] 

Jean-Daniel Cryans commented on HBASE-9865:
---

+1, would wait for churro's cluster testing before committing.

> WALEdit.heapSize() is incorrect in certain replication scenarios which may 
> cause RegionServers to go OOM
> 
>
> Key: HBASE-9865
> URL: https://issues.apache.org/jira/browse/HBASE-9865
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.5, 0.95.0
>Reporter: churro morales
>Assignee: Lars Hofhansl
> Fix For: 0.98.0, 0.96.1, 0.94.14
>
> Attachments: 9865-0.94-v2.txt, 9865-0.94-v4.txt, 9865-sample-1.txt, 
> 9865-sample.txt, 9865-trunk-v2.txt, 9865-trunk-v3.txt, 9865-trunk-v4.txt, 
> 9865-trunk.txt
>
>
> WALEdit.heapSize() is incorrect in certain replication scenarios which may 
> cause RegionServers to go OOM.
> A little background on this issue.  We noticed that our source replication 
> regionservers would get into gc storms and sometimes even OOM. 
> We noticed a case where it showed that there were around 25k WALEdits to 
> replicate, each one with an ArrayList of KeyValues.  The array list had a 
> capacity of around 90k (using 350KB of heap memory) but had around 6 non null 
> entries.
> When the ReplicationSource.readAllEntriesToReplicateOrNextFile() gets a 
> WALEdit it removes all kv's that are scoped other than local.  
> But in doing so we don't account for the capacity of the ArrayList when 
> determining heapSize for a WALEdit.  The logic for shipping a batch is 
> whether you have hit a size capacity or number of entries capacity.  
> Therefore if you have a WALEdit with 25k entries and suppose all are removed: 
> The size of the arrayList is 0 (we don't even count the collection's heap 
> size currently) but the capacity is ignored.
> This will yield a heapSize() of 0 bytes while in the best case it would be at 
> least 10 bytes (provided you pass an initialCapacity and you have a 
> 32-bit JVM) 
> I have some ideas on how to address this problem and want to know everyone's 
> thoughts:
> 1. We use a probabilistic counter such as HyperLogLog and create something 
> like:
>   * class CapacityEstimateArrayList implements ArrayList
>   ** this class overrides all additive methods to update the 
> probabilistic counts
>   ** it includes one additional method called estimateCapacity 
> (we would take estimateCapacity - size() and fill in sizes for all references)
>   * Then we can do something like this in WALEdit.heapSize:
>   
> {code}
>   public long heapSize() {
> long ret = ClassSize.ARRAYLIST;
> for (KeyValue kv : kvs) {
>   ret += kv.heapSize();
> }
> long nullEntriesEstimate = kvs.getCapacityEstimate() - kvs.size();
> ret += ClassSize.align(nullEntriesEstimate * ClassSize.REFERENCE);
> if (scopes != null) {
>   ret += ClassSize.TREEMAP;
>   ret += ClassSize.align(scopes.size() * ClassSize.MAP_ENTRY);
>   // TODO this isn't quite right, need help here
> }
> return ret;
>   }   
> {code}
> 2. In ReplicationSource.removeNonReplicableEdits() we know the size of the 
> array originally, and we provide some percentage threshold.  When that 
> threshold is met (50% of the entries have been removed) we can call 
> kvs.trimToSize()
> 3. in the heapSize() method for WALEdit we could use reflection (Please don't 
> shoot me for this) to grab the actual capacity of the list.  Doing something 
> like this:
> {code}
> public int getArrayListCapacity()  {
> try {
>   Field f = ArrayList.class.getDeclaredField("elementData");
>   f.setAccessible(true);
>   return ((Object[]) f.get(kvs)).length;
> } catch (Exception e) {
>   log.warn("Exception in trying to get capacity on ArrayList", e);
>   return kvs.size();
> }
> }
> {code}
> I am partial to (1) using HyperLogLog and creating a 
> CapacityEstimateArrayList, this is reusable throughout the code for other 
> classes that implement HeapSize which contains ArrayLists.  The memory 
> footprint is very small and it is very fast.  The issue is that this is an 
> estimate; although we can configure the precision, we will most likely always be 
> conservative.  The estimateCapacity will always be less than the 
> actualCapacity, but it will be close. I think that putting the logic in 
> removeNonReplicableEdits will work, but this only solves the heapSize problem 
> in this particular scenario.  Solution 3 is slow and horrible but that gives 
> us the exact answer.
> I would love to hear if anyone else has any other ideas on how to remedy this 
> problem?  I have code for trunk and 0.94 for all 3 ideas and can provide a 
> patch i
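Option 3 above can be sketched as a standalone snippet. This is an illustration only, not the patch: `CapacityProbe` is an invented name, and `"elementData"` is a JDK implementation detail of `ArrayList` that newer JVMs may hide behind the module system (on Java 9+ it can require `--add-opens java.base/java.util=ALL-UNNAMED`, in which case the catch-block fallback kicks in).

```java
import java.lang.reflect.Field;
import java.util.ArrayList;

// Illustration of option 3: read an ArrayList's backing-array capacity via
// reflection. "elementData" is a JDK implementation detail, so this can
// fail; we then fall back to size(), as the comment above suggests.
public class CapacityProbe {

    // Returns the length of the list's backing array, or size() if the
    // reflective access is not permitted.
    public static int capacityOf(ArrayList<?> list) {
        try {
            Field f = ArrayList.class.getDeclaredField("elementData");
            f.setAccessible(true);
            return ((Object[]) f.get(list)).length;
        } catch (Exception e) {
            return list.size();
        }
    }

    public static void main(String[] args) {
        ArrayList<byte[]> kvs = new ArrayList<>();
        for (int i = 0; i < 25_000; i++) {
            kvs.add(new byte[0]);
        }
        kvs.clear(); // size drops to 0, but the backing array keeps its length
        System.out.println("size=" + kvs.size() + " capacity=" + capacityOf(kvs));
    }
}
```

The `main` method demonstrates the underestimation the description complains about: after `clear()`, `size()` reports 0 while the retained capacity still pins tens of thousands of null reference slots that `heapSize()` never counts.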

[jira] [Commented] (HBASE-7663) [Per-KV security] Visibility labels

2013-11-06 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13815107#comment-13815107
 ] 

Andrew Purtell commented on HBASE-7663:
---

bq. Should we have an Interface that is other-than-Mutation that Scan and Get 
implement (and Increment I suppose, since it returns a value)? 

I have the same issue over on the cell ACL patch, I need to duplicate these 
convenience getters and setters in Get, Mutation, and Scan. It would be good to 
have a common interface or base class for Scan and Get, maybe 'Query' (for 
symmetry with Mutation)? I have fun in places receiving OperationWithAttributes 
and then downcasting; that would go away.
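The shape of such a common parent could look like the sketch below. Everything here is hypothetical: `OperationWithAttributes` is reduced to a minimal stand-in, and the attribute key and accessor names are invented for illustration, not taken from the HBase API.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal stand-in for HBase's OperationWithAttributes; the real class
// does more (serialization, attribute size accounting, etc.).
abstract class OperationWithAttributes {
    private final Map<String, byte[]> attributes = new HashMap<>();
    public void setAttribute(String name, byte[] value) { attributes.put(name, value); }
    public byte[] getAttribute(String name) { return attributes.get(name); }
}

// Hypothetical common parent for read operations, mirroring Mutation on the
// write side: per-cell security accessors live here once instead of being
// duplicated in Get and Scan. Names are illustrative only.
abstract class Query extends OperationWithAttributes {
    private static final String AUTHS_ATTR = "_authorizations_"; // invented key

    public Query setAuthorizations(byte[] serializedAuths) {
        setAttribute(AUTHS_ATTR, serializedAuths);
        return this;
    }

    public byte[] getAuthorizations() {
        return getAttribute(AUTHS_ATTR);
    }
}

class Get extends Query { }
class Scan extends Query { }

public class QuerySketch {
    public static void main(String[] args) {
        Scan scan = new Scan();
        scan.setAuthorizations(new byte[] { 1, 2, 3 });
        System.out.println("auth bytes: " + scan.getAuthorizations().length);
    }
}
```

Call sites that today receive `OperationWithAttributes` and downcast could then accept `Query` directly, which is the cleanup the comment is asking for.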

> [Per-KV security] Visibility labels
> ---
>
> Key: HBASE-7663
> URL: https://issues.apache.org/jira/browse/HBASE-7663
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors, security
>Affects Versions: 0.98.0
>Reporter: Andrew Purtell
>Assignee: Anoop Sam John
> Fix For: 0.98.0
>
> Attachments: HBASE-7663.patch, HBASE-7663_V2.patch, 
> HBASE-7663_V3.patch, HBASE-7663_V4.patch, HBASE-7663_V5.patch, 
> HBASE-7663_V6.patch
>
>
> Implement Accumulo-style visibility labels. Consider the following design 
> principles:
> - Coprocessor based implementation
> - Minimal to no changes to core code
> - Use KeyValue tags (HBASE-7448) to carry labels
> - Use OperationWithAttributes# {get,set}Attribute for handling visibility 
> labels in the API
> - Implement a new filter for evaluating visibility labels as KVs are streamed 
> through.
> This approach would be consistent in deployment and API details with other 
> per-KV security work, supporting environments where they might be both be 
> employed, even stacked on some tables.
> See the parent issue for more discussion.





[jira] [Commented] (HBASE-9888) HBase replicates edits written before the replication peer is created

2013-11-06 Thread Dave Latham (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13815111#comment-13815111
 ] 

Dave Latham commented on HBASE-9888:


{quote}
So in your case the replication was enabled at the master cluster and the table 
cf was also replication enabled (scope>0). At a later point you added a peer to 
the master and you can see the older edits also getting replicated. Is the 
scenario correct?
{quote}
Yes, that's correct.

{quote}
Did you consider writing a replication controller to be deployed on the sink 
cluster ?
{quote}
Nope, didn't consider it because we weren't aware of the issue ahead of time.  
We ended up resolving the infinite replication cycle by introducing a patch to 
allow configuring specific cluster UUIDs whose edits should not be replicated. 
 For the future, I'd prefer a built-in improvement.

{quote}
In 0.94, HLogKey has a writeTime and we could seek in the current WAL until we 
find an edit that's been written after the source was created. It's still fuzzy 
since the time that each source actually gets created will differ for each RS, 
but at least you wouldn't start replicating old edits.
{quote}
That sounds great.  Is that 0.94 only or do the newer versions also have it?  
Do you have an idea where the minimum timestamp would be generated?  Would it 
work to just do it in each RS when the ReplicationSource on that RS is created 
(in the mode for add_peer)?

Alternatively, should each RS roll its HLog when creating a new peer?
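The writeTime-based skip being discussed could look roughly like the sketch below. It is an illustration under stated assumptions: `WalEntry` is a stand-in for the real HLog entry type, and real code would seek within the log file rather than filter a materialized list.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the writeTime-based skip: when a replication source starts on a
// pre-existing HLog, drop entries whose write time predates the peer's
// creation. WalEntry is a stand-in, not the actual HBase entry class.
public class PeerCreationFilter {

    static final class WalEntry {
        final long writeTime; // mirrors HLogKey's writeTime in 0.94
        final String row;
        WalEntry(long writeTime, String row) {
            this.writeTime = writeTime;
            this.row = row;
        }
    }

    // Keep only entries written at or after the moment the peer was added.
    public static List<WalEntry> entriesToReplicate(List<WalEntry> log, long peerCreatedTs) {
        List<WalEntry> out = new ArrayList<>();
        for (WalEntry e : log) {
            if (e.writeTime >= peerCreatedTs) {
                out.add(e);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<WalEntry> log = new ArrayList<>();
        log.add(new WalEntry(100, "old-row"));
        log.add(new WalEntry(200, "new-row"));
        List<WalEntry> toShip = entriesToReplicate(log, 150);
        System.out.println("shipping " + toShip.size() + " of " + log.size() + " entries");
    }
}
```

As the comment notes, this stays fuzzy: each RS would record its own peer-creation timestamp when its ReplicationSource is created, so the cutoff differs slightly per server, but no edits written well before the peer existed would ship.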

> HBase replicates edits written before the replication peer is created
> -
>
> Key: HBASE-9888
> URL: https://issues.apache.org/jira/browse/HBASE-9888
> Project: HBase
>  Issue Type: Bug
>Reporter: Dave Latham
>
> When creating a new replication peer the ReplicationSourceManager enqueues 
> the currently open HLog to the ReplicationSource to ship to the destination 
> cluster.  The ReplicationSource starts at the beginning of the HLog and ships 
> over any pre-existing writes.
> A workaround is to roll all the HLogs before enabling replication.
> A little background for how it affected us - we were migrating one cluster in 
> a master-master pair.  I.e. transitioning from A <\-> B to B <-> C.  After 
> shutting down writes from A -> B we enabled writes from C -> B.  However, 
> this replicated some earlier writes that were in C's HLogs that had 
> originated in A.  Since we were running a version of HBase before HBASE-7709, 
> those writes then got caught in an infinite replication cycle, bringing 
> down region servers with OOMs because of HBASE-9865.
> However, in general, if one wants to manage what data gets replicated, one 
> wouldn't expect that potentially very old writes would be included when 
> setting up a new replication link.





[jira] [Commented] (HBASE-7662) [Per-KV security] Store and apply per cell ACLs into/from KeyValue tags

2013-11-06 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13815115#comment-13815115
 ] 

Andrew Purtell commented on HBASE-7662:
---

{quote}
Regression: 
org.apache.hadoop.hbase.security.access.TestAccessControlFilter.testQualifierAccess
java.lang.AssertionError: Expected 100 rows returned expected:<100> but was:<0>
{quote}

Fallout from a recent rebase, looking into it.

> [Per-KV security] Store and apply per cell ACLs into/from KeyValue tags
> ---
>
> Key: HBASE-7662
> URL: https://issues.apache.org/jira/browse/HBASE-7662
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors, security
>Affects Versions: 0.98.0
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
> Attachments: 7662.patch, 7662.patch, 7662.patch, 7662.patch
>
>
> We can improve the performance of per-cell authorization if the read of the 
> cell ACL, if any, is combined with the sequential read of the cell data 
> already in progress. When tags are inlined with KVs in block encoding (see 
> HBASE-7448, and more generally HBASE-7233), we can use them to carry cell 
> ACLs instead of using out-of-line storage (HBASE-7661) for that purpose.





[jira] [Commented] (HBASE-9903) Remove the jamon generated classes from the findbugs analysis

2013-11-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13815119#comment-13815119
 ] 

Hadoop QA commented on HBASE-9903:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12612400/9903.v2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 1 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7755//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7755//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7755//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7755//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7755//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7755//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7755//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7755//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7755//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7755//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7755//console

This message is automatically generated.

> Remove the jamon generated classes from the findbugs analysis
> -
>
> Key: HBASE-9903
> URL: https://issues.apache.org/jira/browse/HBASE-9903
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.98.0, 0.96.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 0.98.0
>
> Attachments: 9903.v1.patch, 9903.v2.patch
>
>
> The current filter does not work.





[jira] [Updated] (HBASE-9901) Add a toString in HTable, fix a log in AssignmentManager

2013-11-06 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-9901:
---

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Add a toString in HTable, fix a log in AssignmentManager
> 
>
> Key: HBASE-9901
> URL: https://issues.apache.org/jira/browse/HBASE-9901
> Project: HBase
>  Issue Type: Bug
>  Components: Client, regionserver
>Affects Versions: 0.98.0, 0.96.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
>Priority: Trivial
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9901.v1.patch, 9901.v2.patch
>
>






[jira] [Updated] (HBASE-9885) Avoid some Result creation in protobuf conversions

2013-11-06 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-9885:
---

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed, thanks for the review, Stack!

> Avoid some Result creation in protobuf conversions
> --
>
> Key: HBASE-9885
> URL: https://issues.apache.org/jira/browse/HBASE-9885
> Project: HBase
>  Issue Type: Bug
>  Components: Client, Protobufs, regionserver
>Affects Versions: 0.98.0, 0.96.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9885.v1.patch, 9885.v2, 9885.v2.patch, 9885.v3.patch, 
> 9885.v3.patch
>
>
> We create a lot of Result objects that we could avoid, as they contain nothing 
> other than a boolean value. We sometimes create a protobuf builder as well on this 
> path; this can be avoided.





[jira] [Created] (HBASE-9904) Solve skipping data in HTable scans

2013-11-06 Thread Manukranth Kolloju (JIRA)
Manukranth Kolloju created HBASE-9904:
-

 Summary: Solve skipping data in HTable scans
 Key: HBASE-9904
 URL: https://issues.apache.org/jira/browse/HBASE-9904
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.89-fb
Reporter: Manukranth Kolloju
 Fix For: 0.89-fb


The HTable client cannot retry a scan operation in the 
getRegionServerWithRetries code path.
This will result in the client missing data. This can be worked around by setting 
hbase.client.retries.number to 1.

The whole problem is that Callable knows nothing about retries, and the protocol 
it dances to doesn't support retries either.
This fix will keep the Callable protocol (an ugly thing worth merciless refactoring) 
intact but will change
ScannerCallable to anticipate retries. What we want is to make failed 
operations identities for the outside world:
N1 , N2 , F3 , N3 , F4 , F4 , N4 ... = N1 , N2 , N3 , N4 ...
where Nk are successful operations and Fk are failed operations.
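The identity property can be illustrated with a toy model. All names below are invented for illustration (the real fix lives in ScannerCallable): the key point is that a failed attempt must not advance the scan position, so the caller observes exactly the successful sequence with no rows skipped.

```java
// Toy model of the identity property: a failed next() attempt leaves the
// scan position untouched, so the caller sees N1, N2, N3, ... with nothing
// skipped. Names are invented; the real fix is in ScannerCallable.
public class ResumableScanner {
    private final String[] rows;
    private final int failEvery; // inject a simulated failure every N attempts
    private int delivered = 0;   // rows actually handed to the caller
    private int attempts = 0;

    public ResumableScanner(String[] rows, int failEvery) {
        this.rows = rows;
        this.failEvery = failEvery;
    }

    // One attempt; a simulated failure happens before the cursor advances.
    private String attemptNext() throws Exception {
        attempts++;
        if (failEvery > 0 && attempts % failEvery == 0) {
            throw new Exception("simulated NotServingRegionException");
        }
        return delivered < rows.length ? rows[delivered++] : null;
    }

    // Retry until an attempt succeeds: failed attempts are identities.
    public String next(int maxRetries) throws Exception {
        Exception last = null;
        for (int i = 0; i <= maxRetries; i++) {
            try {
                return attemptNext();
            } catch (Exception e) {
                last = e;
            }
        }
        throw last;
    }

    public static void main(String[] args) throws Exception {
        ResumableScanner scanner = new ResumableScanner(new String[] {"a", "b", "c"}, 2);
        for (String row; (row = scanner.next(3)) != null; ) {
            System.out.println(row); // prints a, b, c despite injected failures
        }
    }
}
```

With `failEvery = 2`, every other attempt fails, yet the caller still receives a, b, c in order: the F-attempts are identities exactly as the description demands.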





[jira] [Updated] (HBASE-9904) Solve skipping data in HTable scans

2013-11-06 Thread Manukranth Kolloju (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manukranth Kolloju updated HBASE-9904:
--

Hadoop Flags: Reviewed
  Status: Patch Available  (was: Open)

> Solve skipping data in HTable scans
> ---
>
> Key: HBASE-9904
> URL: https://issues.apache.org/jira/browse/HBASE-9904
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 0.89-fb
>Reporter: Manukranth Kolloju
> Fix For: 0.89-fb
>
>
> The HTable client cannot retry a scan operation in the 
> getRegionServerWithRetries code path.
> This will result in the client missing data. This can be worked around by setting 
> hbase.client.retries.number to 1.
> The whole problem is that Callable knows nothing about retries, and the 
> protocol it dances to doesn't support retries either.
> This fix will keep the Callable protocol (an ugly thing worth merciless refactoring) 
> intact but will change
> ScannerCallable to anticipate retries. What we want is to make failed 
> operations identities for the outside world:
> N1 , N2 , F3 , N3 , F4 , F4 , N4 ... = N1 , N2 , N3 , N4 ...
> where Nk are successful operations and Fk are failed operations.





[jira] [Updated] (HBASE-9904) Solve skipping data in HTable scans

2013-11-06 Thread Manukranth Kolloju (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manukranth Kolloju updated HBASE-9904:
--

Attachment: scan.diff

> Solve skipping data in HTable scans
> ---
>
> Key: HBASE-9904
> URL: https://issues.apache.org/jira/browse/HBASE-9904
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 0.89-fb
>Reporter: Manukranth Kolloju
> Fix For: 0.89-fb
>
> Attachments: scan.diff
>
>
> The HTable client cannot retry a scan operation in the 
> getRegionServerWithRetries code path.
> This will result in the client missing data. This can be worked around using 
> hbase.client.retries.number to 1.
> The whole problem is that Callable knows nothing about retries and the 
> protocol it dances to as well doesn't support retires.
> This fix will keep Callable protocol (ugly thing worth merciless refactoring) 
> intact but will change
> ScannerCallable to anticipate retries. What we want is to make failed 
> operations to be identities for outside world:
> N1 , N2 , F3 , N3 , F4 , F4 , N4 ... = N1 , N2 , N3 , N4 ...
> where Nk are successful operation and Fk are failed operations.





[jira] [Commented] (HBASE-9865) WALEdit.heapSize() is incorrect in certain replication scenarios which may cause RegionServers to go OOM

2013-11-06 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13815162#comment-13815162
 ] 

Lars Hofhansl commented on HBASE-9865:
--

If I get time I might write a microtest comparing the reuse approach with new 
allocations and varying batch sizes.

> WALEdit.heapSize() is incorrect in certain replication scenarios which may 
> cause RegionServers to go OOM
> 
>
> Key: HBASE-9865
> URL: https://issues.apache.org/jira/browse/HBASE-9865
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.5, 0.95.0
>Reporter: churro morales
>Assignee: Lars Hofhansl
> Fix For: 0.98.0, 0.96.1, 0.94.14
>
> Attachments: 9865-0.94-v2.txt, 9865-0.94-v4.txt, 9865-sample-1.txt, 
> 9865-sample.txt, 9865-trunk-v2.txt, 9865-trunk-v3.txt, 9865-trunk-v4.txt, 
> 9865-trunk.txt
>
>
> WALEdit.heapSize() is incorrect in certain replication scenarios which may 
> cause RegionServers to go OOM.
> A little background on this issue.  We noticed that our source replication 
> regionservers would get into gc storms and sometimes even OOM. 
> We noticed a case where it showed that there were around 25k WALEdits to 
> replicate, each one with an ArrayList of KeyValues.  The array list had a 
> capacity of around 90k (using 350KB of heap memory) but had around 6 non null 
> entries.
> When the ReplicationSource.readAllEntriesToReplicateOrNextFile() gets a 
> WALEdit it removes all kv's that are scoped other than local.  
> But in doing so we don't account for the capacity of the ArrayList when 
> determining heapSize for a WALEdit.  The logic for shipping a batch is 
> whether you have hit a size capacity or number of entries capacity.  
> Therefore if you have a WALEdit with 25k entries and suppose all are removed: 
> The size of the arrayList is 0 (we don't even count the collection's heap 
> size currently) but the capacity is ignored.
> This will yield a heapSize() of 0 bytes while in the best case it would be at 
> least 10 bytes (provided you pass an initialCapacity and you have a 
> 32-bit JVM) 
> I have some ideas on how to address this problem and want to know everyone's 
> thoughts:
> 1. We use a probabilistic counter such as HyperLogLog and create something 
> like:
>   * class CapacityEstimateArrayList implements ArrayList
>   ** this class overrides all additive methods to update the 
> probabilistic counts
>   ** it includes one additional method called estimateCapacity 
> (we would take estimateCapacity - size() and fill in sizes for all references)
>   * Then we can do something like this in WALEdit.heapSize:
>   
> {code}
>   public long heapSize() {
> long ret = ClassSize.ARRAYLIST;
> for (KeyValue kv : kvs) {
>   ret += kv.heapSize();
> }
> long nullEntriesEstimate = kvs.getCapacityEstimate() - kvs.size();
> ret += ClassSize.align(nullEntriesEstimate * ClassSize.REFERENCE);
> if (scopes != null) {
>   ret += ClassSize.TREEMAP;
>   ret += ClassSize.align(scopes.size() * ClassSize.MAP_ENTRY);
>   // TODO this isn't quite right, need help here
> }
> return ret;
>   }   
> {code}
> 2. In ReplicationSource.removeNonReplicableEdits() we know the size of the 
> array originally, and we provide some percentage threshold.  When that 
> threshold is met (50% of the entries have been removed) we can call 
> kvs.trimToSize()
> 3. in the heapSize() method for WALEdit we could use reflection (Please don't 
> shoot me for this) to grab the actual capacity of the list.  Doing something 
> like this:
> {code}
> public int getArrayListCapacity()  {
> try {
>   Field f = ArrayList.class.getDeclaredField("elementData");
>   f.setAccessible(true);
>   return ((Object[]) f.get(kvs)).length;
> } catch (Exception e) {
>   log.warn("Exception in trying to get capacity on ArrayList", e);
>   return kvs.size();
> }
> }
> {code}
> I am partial to (1) using HyperLogLog and creating a 
> CapacityEstimateArrayList, this is reusable throughout the code for other 
> classes that implement HeapSize which contains ArrayLists.  The memory 
> footprint is very small and it is very fast.  The issue is that this is an 
> estimate; although we can configure the precision, we will most likely always be 
> conservative.  The estimateCapacity will always be less than the 
> actualCapacity, but it will be close. I think that putting the logic in 
> removeNonReplicableEdits will work, but this only solves the heapSize problem 
> in this particular scenario.  Solution 3 is slow and horrible but that gives 
> us the exact answer.
> I would love to hear if anyone else has any other ideas on how to remedy this 
> problem?  I have code for trunk and 0.94 f

[jira] [Commented] (HBASE-9904) Solve skipping data in HTable scans

2013-11-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13815169#comment-13815169
 ] 

Hadoop QA commented on HBASE-9904:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12612417/scan.diff
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 9 new 
or modified tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7758//console

This message is automatically generated.

> Solve skipping data in HTable scans
> ---
>
> Key: HBASE-9904
> URL: https://issues.apache.org/jira/browse/HBASE-9904
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 0.89-fb
>Reporter: Manukranth Kolloju
> Fix For: 0.89-fb
>
> Attachments: scan.diff
>
>
> The HTable client cannot retry a scan operation in the 
> getRegionServerWithRetries code path.
> This will result in the client missing data. This can be worked around by setting 
> hbase.client.retries.number to 1.
> The whole problem is that Callable knows nothing about retries, and the 
> protocol it dances to doesn't support retries either.
> This fix will keep the Callable protocol (an ugly thing worth merciless refactoring) 
> intact but will change
> ScannerCallable to anticipate retries. What we want is to make failed 
> operations identities for the outside world:
> N1 , N2 , F3 , N3 , F4 , F4 , N4 ... = N1 , N2 , N3 , N4 ...
> where Nk are successful operations and Fk are failed operations.





[jira] [Commented] (HBASE-9892) Add info port to ServerName to support multi instances in a node

2013-11-06 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13815174#comment-13815174
 ] 

Steve Loughran commented on HBASE-9892:
---

As Enis says, currently we know the problem is there but don't try to fix it. 

The issue we have there is not just if YARN assigns >1 region server to the 
same node (it doesn't currently support anti-affinity in allocation requests), 
but that someone else may be running their own application, HBase or otherwise, 
on the same machine. If you hard code a port it can fail - any port. The sole 
advantage we have is that this will trigger a new container request/review.

Because this also affects the masters, we have to leave that UI at port 0 too, 
which is the worst issue. I would really like to get hold of that via ZK, from 
where we can bootstrap the rest of the cluster information.

> Add info port to ServerName to support multi instances in a node
> 
>
> Key: HBASE-9892
> URL: https://issues.apache.org/jira/browse/HBASE-9892
> Project: HBase
>  Issue Type: Improvement
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
>Priority: Minor
> Attachments: HBASE-9892-0.94-v1.diff, HBASE-9892-0.94-v2.diff, 
> HBASE-9892-0.94-v3.diff
>
>
> The full GC time of a regionserver with a big heap (> 30G) usually cannot be 
> kept under 30s. At the same time, servers with 64G memory are common. So we 
> try to deploy multiple RS instances (2-3) in a single node, with a heap of 
> about 20G ~ 24G for each RS.
> Most things work fine, except the hbase web ui. The master gets the RS 
> info port from the conf, which is not suitable for this situation of multiple 
> RS instances in a node. So we add the info port to ServerName.
> a. At startup, the RS reports its info port to the HMaster.
> b. For the root region, the RS writes the servername with info port to the 
> zookeeper root-region-server node.
> c. For meta regions, the RS writes the servername with info port to the root 
> region.
> d. For user regions, the RS writes the servername with info port to meta 
> regions.
> So the HMaster and clients can get the info port from the servername.
> To test this feature, I changed the RS num from 1 to 3 in standalone mode, so 
> we can test it in standalone mode.
> I think Hoya (HBase on YARN) will encounter the same problem.  Does anyone 
> know how Hoya handles this problem?
> PS: There are different formats for the servername in the zk node and the 
> meta table; I think we need to unify them and refactor the code.
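Steps a-d above amount to making the advertised server name round-trippable with the info port included. The sketch below illustrates the idea only; the class name, fields, and comma-separated format are invented, and the real HBase ServerName encoding differs.

```java
// Illustrative sketch of carrying the info (web UI) port in the server name,
// so each of several RS instances on one host can advertise its own UI.
// The comma-separated format is invented; HBase's actual encoding differs.
public class ServerNameWithInfoPort {
    final String host;
    final int rpcPort;
    final int infoPort;
    final long startCode;

    ServerNameWithInfoPort(String host, int rpcPort, int infoPort, long startCode) {
        this.host = host;
        this.rpcPort = rpcPort;
        this.infoPort = infoPort;
        this.startCode = startCode;
    }

    // What the RS would write to ZK / the root region / meta, per a-d above.
    String toRegistrationString() {
        return host + "," + rpcPort + "," + infoPort + "," + startCode;
    }

    // What the HMaster or a client would parse back out.
    static ServerNameWithInfoPort parse(String s) {
        String[] parts = s.split(",");
        return new ServerNameWithInfoPort(
            parts[0], Integer.parseInt(parts[1]),
            Integer.parseInt(parts[2]), Long.parseLong(parts[3]));
    }

    public static void main(String[] args) {
        ServerNameWithInfoPort sn =
            new ServerNameWithInfoPort("host1", 60020, 60030, 1383000000000L);
        String reg = sn.toRegistrationString();
        System.out.println(reg + " -> info port " + parse(reg).infoPort);
    }
}
```

Because the info port travels with the registration string, the master and clients can link to the correct web UI even when several regionservers share one host, which is the point of the proposal (and also the PS: there would be one format to unify across the zk node and the meta table).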





[jira] [Commented] (HBASE-9818) NPE in HFileBlock#AbstractFSReader#readAtOffset

2013-11-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13815176#comment-13815176
 ] 

Hadoop QA commented on HBASE-9818:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12612405/9818-v5.txt
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 1 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 4 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7756//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7756//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7756//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7756//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7756//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7756//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7756//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7756//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7756//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7756//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7756//console

This message is automatically generated.

> NPE in HFileBlock#AbstractFSReader#readAtOffset
> ---
>
> Key: HBASE-9818
> URL: https://issues.apache.org/jira/browse/HBASE-9818
> Project: HBase
>  Issue Type: Bug
>Reporter: Jimmy Xiang
>Assignee: Ted Yu
> Attachments: 9818-v2.txt, 9818-v3.txt, 9818-v4.txt, 9818-v5.txt
>
>
> HFileBlock#istream seems to be null.  I was wondering whether we should hide 
> FSDataInputStreamWrapper#useHBaseChecksum.
> By the way, this happened when online schema change is enabled (encoding)
> {noformat}
> 2013-10-22 10:58:43,321 ERROR [RpcServer.handler=28,port=36020] 
> regionserver.HRegionServer:
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1200)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockDataInternal(HFileBlock.java:1436)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:359)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:254)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:503)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.reseekTo(HFileReaderV2.java:553)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:245)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFil

[jira] [Commented] (HBASE-9879) Can't undelete a KeyValue

2013-11-06 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13815179#comment-13815179
 ] 

Sergey Shelukhin commented on HBASE-9879:
-

See HBASE-8770. This would not be an issue if deletes and puts were resolved 
consistently...

> Can't undelete a KeyValue
> -
>
> Key: HBASE-9879
> URL: https://issues.apache.org/jira/browse/HBASE-9879
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.96.0
>Reporter: Benoit Sigoure
>
> Test scenario:
> put(KV, timestamp=100)
> put(KV, timestamp=200)
> delete(KV, timestamp=200, with MutationProto.DeleteType.DELETE_ONE_VERSION)
> get(KV) => returns value at timestamp=100 (OK)
> put(KV, timestamp=200)
> get(KV) => returns value at timestamp=100 (but not the one at timestamp=200 
> that was "reborn" by the previous put)
> Is that normal?
> I ran into this bug while running the integration tests at 
> https://github.com/OpenTSDB/asynchbase/pull/60 – the first time you run it, 
> it passes, but after that, it keeps failing.  Sorry I don't have the 
> corresponding HTable-based code but that should be fairly easy to write.
> I only tested this with 0.96.0, dunno yet how this behaved in prior releases.
> My hunch is that the tombstone added by the DELETE_ONE_VERSION keeps 
> shadowing the value even after it's reborn.
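Since the reporter notes the HTable-based repro is missing, the hunch above can at least be modelled with a toy in-memory column: a DELETE_ONE_VERSION tombstone matched purely on timestamp and retained after the re-put. This is a sketch of the hypothesised semantics, not HBase's actual read path; all names are illustrative.

```java
import java.util.HashSet;
import java.util.Map;
import java.util.NavigableMap;
import java.util.Set;
import java.util.TreeMap;

// Toy model of a single cell (one row/cf/qualifier): puts keyed by timestamp,
// plus one-version tombstones. Models the hypothesis that the tombstone is
// kept (until a compaction, not modelled here) and masks any put with the
// same timestamp, regardless of write order.
class ToyColumn {
    private final NavigableMap<Long, String> puts = new TreeMap<>();
    private final Set<Long> oneVersionTombstones = new HashSet<>();

    void put(long ts, String value) { puts.put(ts, value); }

    void deleteOneVersion(long ts) { oneVersionTombstones.add(ts); }

    // Read path: return the newest put whose timestamp is not shadowed.
    String get() {
        for (Map.Entry<Long, String> e : puts.descendingMap().entrySet()) {
            if (!oneVersionTombstones.contains(e.getKey())) {
                return e.getValue();
            }
        }
        return null;
    }
}
```

Replaying the scenario above: after put(100), put(200), deleteOneVersion(200), get() returns the ts=100 value; a second put(200) is still shadowed, matching the observed behaviour.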



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-7662) [Per-KV security] Store and apply per cell ACLs into/from KeyValue tags

2013-11-06 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-7662:
--

Status: Open  (was: Patch Available)

> [Per-KV security] Store and apply per cell ACLs into/from KeyValue tags
> ---
>
> Key: HBASE-7662
> URL: https://issues.apache.org/jira/browse/HBASE-7662
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors, security
>Affects Versions: 0.98.0
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
> Attachments: 7662.patch, 7662.patch, 7662.patch
>
>
> We can improve the performance of per-cell authorization if the read of the 
> cell ACL, if any, is combined with the sequential read of the cell data 
> already in progress. When tags are inlined with KVs in block encoding (see 
> HBASE-7448, and more generally HBASE-7233), we can use them to carry cell 
> ACLs instead of using out-of-line storage (HBASE-7661) for that purpose.





[jira] [Updated] (HBASE-7662) [Per-KV security] Store and apply per cell ACLs into/from KeyValue tags

2013-11-06 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-7662:
--

Attachment: 7662.patch

> [Per-KV security] Store and apply per cell ACLs into/from KeyValue tags
> ---
>
> Key: HBASE-7662
> URL: https://issues.apache.org/jira/browse/HBASE-7662
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors, security
>Affects Versions: 0.98.0
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
> Attachments: 7662.patch, 7662.patch, 7662.patch
>
>
> We can improve the performance of per-cell authorization if the read of the 
> cell ACL, if any, is combined with the sequential read of the cell data 
> already in progress. When tags are inlined with KVs in block encoding (see 
> HBASE-7448, and more generally HBASE-7233), we can use them to carry cell 
> ACLs instead of using out-of-line storage (HBASE-7661) for that purpose.





[jira] [Updated] (HBASE-7662) [Per-KV security] Store and apply per cell ACLs into/from KeyValue tags

2013-11-06 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-7662:
--

Attachment: (was: 7662.patch)

> [Per-KV security] Store and apply per cell ACLs into/from KeyValue tags
> ---
>
> Key: HBASE-7662
> URL: https://issues.apache.org/jira/browse/HBASE-7662
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors, security
>Affects Versions: 0.98.0
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
> Attachments: 7662.patch, 7662.patch, 7662.patch
>
>
> We can improve the performance of per-cell authorization if the read of the 
> cell ACL, if any, is combined with the sequential read of the cell data 
> already in progress. When tags are inlined with KVs in block encoding (see 
> HBASE-7448, and more generally HBASE-7233), we can use them to carry cell 
> ACLs instead of using out-of-line storage (HBASE-7661) for that purpose.





[jira] [Updated] (HBASE-7662) [Per-KV security] Store and apply per cell ACLs into/from KeyValue tags

2013-11-06 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-7662:
--

Attachment: (was: 7662.patch)

> [Per-KV security] Store and apply per cell ACLs into/from KeyValue tags
> ---
>
> Key: HBASE-7662
> URL: https://issues.apache.org/jira/browse/HBASE-7662
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors, security
>Affects Versions: 0.98.0
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
> Attachments: 7662.patch, 7662.patch, 7662.patch
>
>
> We can improve the performance of per-cell authorization if the read of the 
> cell ACL, if any, is combined with the sequential read of the cell data 
> already in progress. When tags are inlined with KVs in block encoding (see 
> HBASE-7448, and more generally HBASE-7233), we can use them to carry cell 
> ACLs instead of using out-of-line storage (HBASE-7661) for that purpose.





[jira] [Updated] (HBASE-7662) [Per-KV security] Store and apply per cell ACLs into/from KeyValue tags

2013-11-06 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-7662:
--

Status: Patch Available  (was: Open)

Fix unit test failure in TestAccessControlFilter, refactor a bit in 
AccessController, clean up some warnings in TestAccessControlFilter. Resubmit.

> [Per-KV security] Store and apply per cell ACLs into/from KeyValue tags
> ---
>
> Key: HBASE-7662
> URL: https://issues.apache.org/jira/browse/HBASE-7662
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors, security
>Affects Versions: 0.98.0
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
> Attachments: 7662.patch, 7662.patch, 7662.patch
>
>
> We can improve the performance of per-cell authorization if the read of the 
> cell ACL, if any, is combined with the sequential read of the cell data 
> already in progress. When tags are inlined with KVs in block encoding (see 
> HBASE-7448, and more generally HBASE-7233), we can use them to carry cell 
> ACLs instead of using out-of-line storage (HBASE-7661) for that purpose.





[jira] [Commented] (HBASE-9818) NPE in HFileBlock#AbstractFSReader#readAtOffset

2013-11-06 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13815184#comment-13815184
 ] 

Sergey Shelukhin commented on HBASE-9818:
-

{code}
+return useHBaseChecksum ?
+new Pair<FSDataInputStream, Boolean>(this.streamNoFsChecksum, useHBaseChecksum) : 
+  new Pair<FSDataInputStream, Boolean>(this.stream, useHBaseChecksum);
{code}
the boolean should be saved to a local variable before checking; the value could 
change between the check and the use, resulting in no checksum (or both) being 
used.
Otherwise +1 from me... hopefully someone else can also take a look.
I am not 100% sure whether this is a complete fix or just narrows the window 
quite a bit.
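The race described above is the classic read-the-flag-twice pattern; a minimal sketch of the fix is below. The class and field names are illustrative only, not the actual FSDataInputStreamWrapper code.

```java
// Sketch: a flag that another thread may flip concurrently must be read once
// into a local, so the branch taken and the value returned agree.
class StreamChoice {
    private volatile boolean useHBaseChecksum;
    private final String streamNoFsChecksum = "no-fs-checksum-stream";
    private final String stream = "fs-checksum-stream";

    void setUseHBaseChecksum(boolean b) { useHBaseChecksum = b; }

    // Buggy shape: two reads of the field; a concurrent flip between them can
    // pair the wrong stream with the flag value.
    Object[] chooseBuggy() {
        return useHBaseChecksum
            ? new Object[] { streamNoFsChecksum, useHBaseChecksum }
            : new Object[] { stream, useHBaseChecksum };
    }

    // Fixed shape: a single read; the stream and the flag always match.
    Object[] chooseFixed() {
        boolean hbaseChecksum = this.useHBaseChecksum; // one read only
        return hbaseChecksum
            ? new Object[] { streamNoFsChecksum, hbaseChecksum }
            : new Object[] { stream, hbaseChecksum };
    }
}
```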

> NPE in HFileBlock#AbstractFSReader#readAtOffset
> ---
>
> Key: HBASE-9818
> URL: https://issues.apache.org/jira/browse/HBASE-9818
> Project: HBase
>  Issue Type: Bug
>Reporter: Jimmy Xiang
>Assignee: Ted Yu
> Attachments: 9818-v2.txt, 9818-v3.txt, 9818-v4.txt, 9818-v5.txt
>
>
> HFileBlock#istream seems to be null.  I was wondering whether we should hide 
> FSDataInputStreamWrapper#useHBaseChecksum.
> By the way, this happened when online schema change is enabled (encoding)
> {noformat}
> 2013-10-22 10:58:43,321 ERROR [RpcServer.handler=28,port=36020] 
> regionserver.HRegionServer:
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1200)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockDataInternal(HFileBlock.java:1436)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:359)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:254)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:503)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.reseekTo(HFileReaderV2.java:553)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:245)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:166)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.enforceSeek(StoreFileScanner.java:361)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.pollRealKV(KeyValueHeap.java:336)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:293)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.requestSeek(KeyValueHeap.java:258)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:603)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:476)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:129)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:3546)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:3616)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:3494)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:3485)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3079)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:27022)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:1979)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:90)
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.consumerLoop(SimpleRpcScheduler.java:160)
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.access$000(SimpleRpcScheduler.java:38)
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler$1.run(SimpleRpcScheduler.java:110)
> at java.lang.Thread.run(Thread.java:724)
> 2013-10-22 10:58:43,665 ERROR [RpcServer.handler=23,port=36020] 
> regionserver.HRegionServer:
> org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: Expected 
> nextCallSeq: 53438 But the nextCallSeq got from client: 53437; 
> request=scanner_id: 1252577470624375060 number_of_rows: 100 close_scanner: 
> false next_call_seq: 53437
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3030)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:27022)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:1979)
> 

[jira] [Commented] (HBASE-9792) Region states should update last assignments when a region is opened.

2013-11-06 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13815197#comment-13815197
 ] 

Sergey Shelukhin commented on HBASE-9792:
-

+1

> Region states should update last assignments when a region is opened.
> -
>
> Key: HBASE-9792
> URL: https://issues.apache.org/jira/browse/HBASE-9792
> Project: HBase
>  Issue Type: Bug
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Attachments: trunk-9792.patch, trunk-9792_v2.patch, 
> trunk-9792_v3.1.patch, trunk-9792_v3.patch
>
>
> Currently, we update a region's last assignment region server when the region 
> is online.  We should do this sooner, when the region is moved to OPEN state. 
>  CM could kill this region server before we delete the znode and online the 
> region.





[jira] [Commented] (HBASE-8770) deletes and puts with the same ts should be resolved according to mvcc/seqNum

2013-11-06 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13815195#comment-13815195
 ] 

stack commented on HBASE-8770:
--

I believe that the Delete always sorting before the Put was an arbitrary choice 
way-back-when.  Yes, type should not be a factor; rather, it should be the 
sequenceid.  But then we'd need to have a sequenceid in the key so that the 
order was respected everywhere?
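What "resolve by sequenceid rather than type" could look like as an ordering is sketched below, assuming each cell carried its seqId in the key. This is a hypothetical comparator, not HBase's actual KVComparator.

```java
import java.util.Arrays;
import java.util.Comparator;

// Hypothetical cell within one row/cf/qualifier: only ts, type, seqId shown.
class SimpleCell {
    final long timestamp;
    final boolean isDelete;
    final long seqId;
    SimpleCell(long timestamp, boolean isDelete, long seqId) {
        this.timestamp = timestamp;
        this.isDelete = isDelete;
        this.seqId = seqId;
    }
}

// Newest timestamp first (as in HBase); on a timestamp tie, the later write
// (higher seqId) sorts first, whether it is a Put or a Delete.
class SeqIdTieBreaker implements Comparator<SimpleCell> {
    @Override
    public int compare(SimpleCell a, SimpleCell b) {
        int cmp = Long.compare(b.timestamp, a.timestamp);
        if (cmp != 0) return cmp;
        return Long.compare(b.seqId, a.seqId);
    }
}
```

Under this ordering, a put written after a same-ts delete sorts ahead of the tombstone and would be visible again, which is the "undelete" behaviour HBASE-9879 asks about.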

> deletes and puts with the same ts should be resolved according to mvcc/seqNum
> -
>
> Key: HBASE-8770
> URL: https://issues.apache.org/jira/browse/HBASE-8770
> Project: HBase
>  Issue Type: Brainstorming
>Reporter: Sergey Shelukhin
>
> This came up during HBASE-8721. Puts with the same ts are resolved by seqNum. 
> It's not clear why deletes with the same ts as a put should always mask the 
> put, rather than also being resolved by seqNum.
> What do you think?





[jira] [Commented] (HBASE-9818) NPE in HFileBlock#AbstractFSReader#readAtOffset

2013-11-06 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13815204#comment-13815204
 ] 

Jimmy Xiang commented on HBASE-9818:


I am ok with the change.  But I don't think it fixes the issue.  If the stream 
is really closed, we will get some IOException instead of an NPE.

> NPE in HFileBlock#AbstractFSReader#readAtOffset
> ---
>
> Key: HBASE-9818
> URL: https://issues.apache.org/jira/browse/HBASE-9818
> Project: HBase
>  Issue Type: Bug
>Reporter: Jimmy Xiang
>Assignee: Ted Yu
> Attachments: 9818-v2.txt, 9818-v3.txt, 9818-v4.txt, 9818-v5.txt
>
>
> HFileBlock#istream seems to be null.  I was wondering whether we should hide 
> FSDataInputStreamWrapper#useHBaseChecksum.
> By the way, this happened when online schema change is enabled (encoding)
> {noformat}
> 2013-10-22 10:58:43,321 ERROR [RpcServer.handler=28,port=36020] 
> regionserver.HRegionServer:
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1200)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockDataInternal(HFileBlock.java:1436)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:359)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:254)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:503)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.reseekTo(HFileReaderV2.java:553)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:245)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:166)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.enforceSeek(StoreFileScanner.java:361)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.pollRealKV(KeyValueHeap.java:336)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:293)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.requestSeek(KeyValueHeap.java:258)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:603)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:476)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:129)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:3546)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:3616)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:3494)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:3485)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3079)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:27022)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:1979)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:90)
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.consumerLoop(SimpleRpcScheduler.java:160)
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.access$000(SimpleRpcScheduler.java:38)
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler$1.run(SimpleRpcScheduler.java:110)
> at java.lang.Thread.run(Thread.java:724)
> 2013-10-22 10:58:43,665 ERROR [RpcServer.handler=23,port=36020] 
> regionserver.HRegionServer:
> org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: Expected 
> nextCallSeq: 53438 But the nextCallSeq got from client: 53437; 
> request=scanner_id: 1252577470624375060 number_of_rows: 100 close_scanner: 
> false next_call_seq: 53437
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3030)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:27022)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:1979)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:90)
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.consumerLoop(SimpleRpcScheduler.java:160)
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.access$000(SimpleRpcScheduler.java:38)
> at 
> org.apache.hadoop.

[jira] [Commented] (HBASE-8770) deletes and puts with the same ts should be resolved according to mvcc/seqNum

2013-11-06 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13815214#comment-13815214
 ] 

Sergey Shelukhin commented on HBASE-8770:
-

Between-file conflicts can be resolved using file seqids.
When we write one file (flush/compaction), we can write seqids only for 
conflicting keys (row+cf+q+ts). That should be a relatively small load. We 
already have space for mvcc in the KV, and [~enis] proposes to merge mvcc and 
seqId. Or we can store it in tags.
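The "write seqids only for conflicting keys" idea can be sketched as a single pass over cells already sorted by their flattened key. Everything below is hypothetical flush-side logic with illustrative names, not actual HBase code.

```java
import java.util.ArrayList;
import java.util.List;

// Cell key flattened to row+cf+qualifier+ts, plus its in-memory seqId.
class FlushCell {
    final String key;        // row + cf + qualifier + ts, pre-concatenated
    final long seqId;
    Long persistedSeqId;     // null => seqId not written to the file
    FlushCell(String key, long seqId) { this.key = key; this.seqId = seqId; }
}

class SeqIdThinner {
    // Cells arrive sorted by key; persist a seqId only where the same
    // (row+cf+q+ts) key occurs more than once, keeping the extra load small.
    static void markConflicts(List<FlushCell> sorted) {
        for (int i = 0; i < sorted.size(); i++) {
            FlushCell cur = sorted.get(i);
            boolean clashPrev = i > 0 && sorted.get(i - 1).key.equals(cur.key);
            boolean clashNext = i + 1 < sorted.size()
                && sorted.get(i + 1).key.equals(cur.key);
            if (clashPrev || clashNext) {
                cur.persistedSeqId = cur.seqId;
            }
        }
    }
}
```

Only the colliding keys pay the storage cost; non-conflicting cells keep their implicit (file-level) seqid.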

> deletes and puts with the same ts should be resolved according to mvcc/seqNum
> -
>
> Key: HBASE-8770
> URL: https://issues.apache.org/jira/browse/HBASE-8770
> Project: HBase
>  Issue Type: Brainstorming
>Reporter: Sergey Shelukhin
>
> This came up during HBASE-8721. Puts with the same ts are resolved by seqNum. 
> It's not clear why deletes with the same ts as a put should always mask the 
> put, rather than also being resolved by seqNum.
> What do you think?





[jira] [Updated] (HBASE-9605) Allow AggregationClient to skip specifying column family for row count aggregate

2013-11-06 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-9605:
--

Resolution: Fixed
Status: Resolved  (was: Patch Available)

A separate JIRA was opened for the backport.

> Allow AggregationClient to skip specifying column family for row count 
> aggregate
> 
>
> Key: HBASE-9605
> URL: https://issues.apache.org/jira/browse/HBASE-9605
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 0.98.0
>
> Attachments: 0605-0.94.patch, 9605-v1.txt
>
>
> For rowcounter job, column family is not required as input parameter.
> AggregationClient requires the specification of one column family:
> {code}
> } else if (scan.getFamilyMap().size() != 1) {
>   throw new IOException("There must be only one family.");
> }
> {code}
> We should relax the above requirement for row count aggregate where 
> FirstKeyOnlyFilter would be automatically applied.
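A sketch of the relaxed validation: for a row count with no family specified, signal that first-key-only scanning should be applied instead of throwing; other aggregates keep the one-family requirement. This is a hypothetical shape with illustrative names, not the actual AggregationClient change.

```java
import java.io.IOException;

class AggregationScanValidator {
    // Returns true when the caller should attach a FirstKeyOnlyFilter-style
    // filter itself (row count over all families); throws for invalid input.
    static boolean validateFamilies(int familyCount, boolean isRowCount)
            throws IOException {
        if (isRowCount && familyCount == 0) {
            return true;  // no family needed; scan first key of each row only
        }
        if (familyCount != 1) {
            throw new IOException("There must be only one family.");
        }
        return false;
    }
}
```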





[jira] [Updated] (HBASE-9837) Forward port HBASE-9080 'Retain assignment should be used when re-enabling table(s)'

2013-11-06 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-9837:
--

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

> Forward port HBASE-9080 'Retain assignment should be used when re-enabling 
> table(s)'
> 
>
> Key: HBASE-9837
> URL: https://issues.apache.org/jira/browse/HBASE-9837
> Project: HBase
>  Issue Type: Task
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Critical
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9837-v1.txt
>
>
> HBASE-6143 still has some non-trivial work to do (according to Elliott).
> This issue is about forward porting HBASE-9080 'Retain assignment should be 
> used when re-enabling table(s)' to 0.96 and trunk.





[jira] [Updated] (HBASE-9814) TestRegionServerCoprocessorExceptionWithRemove mentions master in javadoc

2013-11-06 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-9814:
--

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

> TestRegionServerCoprocessorExceptionWithRemove mentions master in javadoc
> -
>
> Key: HBASE-9814
> URL: https://issues.apache.org/jira/browse/HBASE-9814
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Fix For: 0.98.0
>
> Attachments: 9814.txt
>
>
> From TestRegionServerCoprocessorExceptionWithRemove :
> {code}
>  * Expected result is that the master will remove the buggy coprocessor from
> {code}
> Looks like a copy-and-paste error.





[jira] [Commented] (HBASE-9671) CompactRandomRegionOfTableAction should check whether table is enabled

2013-11-06 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13815224#comment-13815224
 ] 

Ted Yu commented on HBASE-9671:
---

If there is no consensus, I can close this JIRA.

> CompactRandomRegionOfTableAction should check whether table is enabled
> --
>
> Key: HBASE-9671
> URL: https://issues.apache.org/jira/browse/HBASE-9671
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 0.98.0
>
> Attachments: 9671-v1.txt
>
>
> In our integration test we saw the following:
> {code}
> 2013-09-26 19:29:47,852|beaver.machine|INFO|2013-09-26 19:29:47,852 INFO  
> [main] client.HBaseAdmin: Started disable of IntegrationTestLoadAndVerify
> ...
> 2013-09-26 19:30:03,459|beaver.machine|INFO|2013-09-26 19:30:03,458 DEBUG 
> [Thread-6] actions.Action: Compacting region 
> IntegrationTestLoadAndVerify,\x8B\xC8\x06\x00\x00\x00\x00\x00/31_0,1380220935462.da93e4f26dbb801b0da03ffc70b6145d.
> ...
> 2013-09-26 19:30:03,500|beaver.machine|INFO|2013-09-26 19:30:03,500 WARN  
> [Thread-6] policies.Policy: Exception occured during performing action: 
> org.apache.hadoop.hbase.NotServingRegionException: 
> org.apache.hadoop.hbase.NotServingRegionException: Region is not online: 
> da93e4f26dbb801b0da03ffc70b6145d
> 2013-09-26 19:30:03,500|beaver.machine|INFO|at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:2576)
> 2013-09-26 19:30:03,501|beaver.machine|INFO|at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.getRegion(HRegionServer.java:3961)
> 2013-09-26 19:30:03,501|beaver.machine|INFO|at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.compactRegion(HRegionServer.java:3776)
> 2013-09-26 19:30:03,501|beaver.machine|INFO|at 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:19803)
> 2013-09-26 19:30:03,502|beaver.machine|INFO|at 
> org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2146)
> 2013-09-26 19:30:03,502|beaver.machine|INFO|at 
> org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1851)
> 2013-09-26 19:30:03,502|beaver.machine|INFO|
> 2013-09-26 19:30:03,502|beaver.machine|INFO|at 
> sun.reflect.GeneratedConstructorAccessor24.newInstance(Unknown Source)
> 2013-09-26 19:30:03,503|beaver.machine|INFO|at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> 2013-09-26 19:30:03,503|beaver.machine|INFO|at 
> java.lang.reflect.Constructor.newInstance(Constructor.java:525)
> 2013-09-26 19:30:03,503|beaver.machine|INFO|at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
> 2013-09-26 19:30:03,503|beaver.machine|INFO|at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
> 2013-09-26 19:30:03,503|beaver.machine|INFO|at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:235)
> 2013-09-26 19:30:03,504|beaver.machine|INFO|at 
> org.apache.hadoop.hbase.client.HBaseAdmin.compact(HBaseAdmin.java:1638)
> 2013-09-26 19:30:03,504|beaver.machine|INFO|at 
> org.apache.hadoop.hbase.client.HBaseAdmin.compact(HBaseAdmin.java:1602)
> 2013-09-26 19:30:03,504|beaver.machine|INFO|at 
> org.apache.hadoop.hbase.client.HBaseAdmin.compact(HBaseAdmin.java:1495)
> 2013-09-26 19:30:03,504|beaver.machine|INFO|at 
> org.apache.hadoop.hbase.chaos.actions.CompactRandomRegionOfTableAction.perform(CompactRandomRegionOfTableAction.java:69)
> 2013-09-26 19:30:03,504|beaver.machine|INFO|at 
> org.apache.hadoop.hbase.chaos.policies.PeriodicRandomActionPolicy.runOneIteration(PeriodicRandomActionPolicy.java:59)
> {code}
> CompactRandomRegionOfTableAction didn't check that table 
> IntegrationTestLoadAndVerify was enabled before issuing compaction request.





[jira] [Commented] (HBASE-9775) Client write path perf issues

2013-11-06 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13815225#comment-13815225
 ] 

stack commented on HBASE-9775:
--

Skimmed patch +1


> Client write path perf issues
> -
>
> Key: HBASE-9775
> URL: https://issues.apache.org/jira/browse/HBASE-9775
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 0.96.0
>Reporter: Elliott Clark
>Priority: Critical
> Attachments: 9775.rig.txt, 9775.rig.v2.patch, 9775.rig.v3.patch, 
> Charts Search   Cloudera Manager - ITBLL.png, Charts Search   Cloudera 
> Manager.png, hbase-9775.patch, job_run.log, short_ycsb.png, ycsb.png, 
> ycsb_insert_94_vs_96.png
>
>
> Testing on larger clusters has not had the desired throughput increases.





[jira] [Resolved] (HBASE-9880) client.TestAsyncProcess.testWithNoClearOnFail broke on 0.96 by HBASE-9867

2013-11-06 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon resolved HBASE-9880.


   Resolution: Fixed
Fix Version/s: 0.96.1
 Hadoop Flags: Reviewed

> client.TestAsyncProcess.testWithNoClearOnFail broke on 0.96 by HBASE-9867 
> --
>
> Key: HBASE-9880
> URL: https://issues.apache.org/jira/browse/HBASE-9880
> Project: HBase
>  Issue Type: Test
>Reporter: stack
>Assignee: Nicolas Liochon
> Fix For: 0.96.1
>
> Attachments: 9880.v1.patch
>
>
> It looks like the backport of HBASE-9867 broke the 0.96 build (fine on trunk).  
> This was my patch.  Let me fix.





[jira] [Commented] (HBASE-9869) Optimize HConnectionManager#getCachedLocation

2013-11-06 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13815227#comment-13815227
 ] 

Nicolas Liochon commented on HBASE-9869:


On this one, from what I saw the measurement was real, but it was partly because 
we were creating too many objects. Now that we're in better shape, it should be 
less visible.

This said, I feel that using a simple implementation without any weak/soft 
reference would be more efficient. That's what AsyncHBase is doing for 
example...

> Optimize HConnectionManager#getCachedLocation
> -
>
> Key: HBASE-9869
> URL: https://issues.apache.org/jira/browse/HBASE-9869
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 0.98.0, 0.96.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 0.98.0, 0.96.1
>
>
> Its javadoc says: "TODO: This method during writing consumes 15% of CPU doing 
> lookup". This is still true, according to YourKit. With 0.96, we also spend 
> more time in these methods: we retry more, and the AsyncProcess calls it in 
> parallel.
> I don't have the patch for this yet, but I will spend some time on it.





[jira] [Issue Comment Deleted] (HBASE-9775) Client write path perf issues

2013-11-06 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-9775:
-

Comment: was deleted

(was: Skimmed patch +1
)

> Client write path perf issues
> -
>
> Key: HBASE-9775
> URL: https://issues.apache.org/jira/browse/HBASE-9775
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 0.96.0
>Reporter: Elliott Clark
>Priority: Critical
> Attachments: 9775.rig.txt, 9775.rig.v2.patch, 9775.rig.v3.patch, 
> Charts Search   Cloudera Manager - ITBLL.png, Charts Search   Cloudera 
> Manager.png, hbase-9775.patch, job_run.log, short_ycsb.png, ycsb.png, 
> ycsb_insert_94_vs_96.png
>
>
> Testing on larger clusters has not had the desired throughput increases.





[jira] [Updated] (HBASE-9886) Optimize ServerName#compareTo

2013-11-06 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-9886:
---

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

> Optimize ServerName#compareTo
> -
>
> Key: HBASE-9886
> URL: https://issues.apache.org/jira/browse/HBASE-9886
> Project: HBase
>  Issue Type: Bug
>  Components: Client, regionserver
>Affects Versions: 0.98.0, 0.96.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
>Priority: Trivial
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9886.v1.patch
>
>
> It shows up in the profiling...
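As a hedged illustration of the kind of optimization such a profiling result usually points to, the sketch below contrasts a string-building `compareTo` with a field-by-field one. `ServerNameSketch` is an assumed stand-in, not the real `org.apache.hadoop.hbase.ServerName`, and the field order (hostname, then port, then startcode) is an assumption of this sketch.

```java
// Illustrative only: compare the three fields directly instead of
// materializing and comparing a "host,port,startcode" string on every call.
public class ServerNameSketch implements Comparable<ServerNameSketch> {
    final String hostname;
    final int port;
    final long startcode;

    ServerNameSketch(String hostname, int port, long startcode) {
        this.hostname = hostname;
        this.port = port;
        this.startcode = startcode;
    }

    // Slow path: allocates two Strings per comparison; this is the sort of
    // cost that shows up when sorted maps keyed by server name sit on the
    // client hot path. (Note its ordering is lexicographic on the string,
    // which differs from numeric ordering of port/startcode.)
    int compareToViaString(ServerNameSketch o) {
        return (hostname + "," + port + "," + startcode)
            .compareTo(o.hostname + "," + o.port + "," + o.startcode);
    }

    // Fast path: field-by-field comparison, no allocation.
    @Override
    public int compareTo(ServerNameSketch o) {
        int c = hostname.compareTo(o.hostname);
        if (c != 0) return c;
        c = Integer.compare(port, o.port);
        if (c != 0) return c;
        return Long.compare(startcode, o.startcode);
    }
}
```

The field-by-field version is both cheaper and arguably more correct, since ports and startcodes compare numerically rather than as digit strings.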





[jira] [Commented] (HBASE-9874) Append and Increment operation drops Tags

2013-11-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13815231#comment-13815231
 ] 

Hudson commented on HBASE-9874:
---

SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #828 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/828/])
HBASE-9874 Append and Increment operation drops Tags (anoopsamjohn: rev 1539224)
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java
* /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java
* /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValue.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/BaseRegionObserver.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionObserver.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionCoprocessorHost.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestTags.java


> Append and Increment operation drops Tags
> -
>
> Key: HBASE-9874
> URL: https://issues.apache.org/jira/browse/HBASE-9874
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 0.98.0
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 0.98.0
>
> Attachments: AccessController.postMutationBeforeWAL.txt, 
> HBASE-9874.patch, HBASE-9874_V2.patch, HBASE-9874_V3.patch
>
>
> We should consider the tags in the existing cells as well as the tags coming 
> in with the cells of the Increment/Append.
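A minimal sketch of that idea, with `String` tags standing in for the real HBase `Tag` objects; the names and types here are illustrative, not the actual fix.

```java
import java.util.ArrayList;
import java.util.List;

// Hedged sketch: when serving an Append/Increment, the result cell should
// carry the union of the tags on the existing cell and the tags on the
// cells supplied in the mutation, instead of dropping the former.
public class TagMergeSketch {
    static List<String> mergeTags(List<String> existingCellTags,
                                  List<String> mutationCellTags) {
        List<String> merged = new ArrayList<>();
        if (existingCellTags != null) merged.addAll(existingCellTags);
        if (mutationCellTags != null) merged.addAll(mutationCellTags);
        return merged;  // before the fix, existing tags were silently lost
    }
}
```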





[jira] [Commented] (HBASE-8541) implement flush-into-stripes in stripe compactions

2013-11-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13815232#comment-13815232
 ] 

Hudson commented on HBASE-8541:
---

SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #828 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/828/])
HBASE-8541 implement flush-into-stripes in stripe compactions (sershe: rev 
1539211)
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/DefaultStoreFileManager.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/DefaultStoreFlusher.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFileManager.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFlusher.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StripeMultiFileWriter.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StripeStoreConfig.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StripeStoreEngine.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StripeStoreFileManager.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StripeStoreFlusher.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/StripeCompactionPolicy.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStripeCompactor.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStripeStoreFileManager.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/compactions/TestStripeCompactionPolicy.java


> implement flush-into-stripes in stripe compactions
> --
>
> Key: HBASE-8541
> URL: https://issues.apache.org/jira/browse/HBASE-8541
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HBASE-8541-latest-with-dependencies.patch, 
> HBASE-8541-latest-with-dependencies.patch, 
> HBASE-8541-latest-with-dependencies.patch, 
> HBASE-8541-latest-with-dependencies.patch, HBASE-8541-v0.patch, 
> HBASE-8541-v1.patch, HBASE-8541-v2.patch, HBASE-8541-v3.patch, 
> HBASE-8541-v4.patch, HBASE-8541-v5.patch
>
>
> Under this design, a flush will be able to write into multiple files, 
> avoiding L0 I/O amplification.
> I have a patch that is missing just one feature: support for concurrent 
> flushes and stripe changes. This can be done via extensive try-locking of 
> stripe changes and flushes, or via advisory flags that do not block flushes, 
> dumping conflicting flushes into L0 in case of (very rare) collisions. For 
> file loading in the latter approach, a set-cover-like problem needs to be 
> solved to determine optimal stripes. That would also address Jimmy's concern 
> about getting rid of metadata, btw. However, I currently don't have time for 
> that. I plan to attach the try-locking patch first, but that probably won't 
> happen for a couple of weeks and should not block the main reviews. Hopefully 
> it can be added on top of them.
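The try-locking alternative described above can be sketched roughly as follows. `StripeFlushSketch` and its methods are hypothetical; a real implementation would write actual store files rather than return markers.

```java
import java.util.concurrent.locks.ReentrantLock;

// Sketch of the try-locking idea: a flush attempts to take the stripe-layout
// lock; on success it writes per-stripe files, and on a (rare) collision
// with a concurrent stripe change it falls back to a single L0 file instead
// of blocking. All names here are illustrative.
public class StripeFlushSketch {
    private final ReentrantLock stripeChangeLock = new ReentrantLock();

    /** Returns "stripes" or "L0" to indicate where the flush landed. */
    String flush() {
        if (stripeChangeLock.tryLock()) {
            try {
                return flushIntoStripes();
            } finally {
                stripeChangeLock.unlock();
            }
        }
        // A concurrent stripe change holds the lock: don't wait, dump to L0.
        return flushIntoL0();
    }

    String flushIntoStripes() { return "stripes"; }
    String flushIntoL0() { return "L0"; }
}
```

The point of `tryLock()` rather than `lock()` is that a flush is never delayed by a stripe change; the cost of a collision is only one extra L0 file to compact later.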





[jira] [Commented] (HBASE-9775) Client write path perf issues

2013-11-06 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13815284#comment-13815284
 ] 

Nicolas Liochon commented on HBASE-9775:


Seems ok. The patch touches many

> Client write path perf issues
> -
>
> Key: HBASE-9775
> URL: https://issues.apache.org/jira/browse/HBASE-9775
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 0.96.0
>Reporter: Elliott Clark
>Priority: Critical
> Attachments: 9775.rig.txt, 9775.rig.v2.patch, 9775.rig.v3.patch, 
> Charts Search   Cloudera Manager - ITBLL.png, Charts Search   Cloudera 
> Manager.png, hbase-9775.patch, job_run.log, short_ycsb.png, ycsb.png, 
> ycsb_insert_94_vs_96.png
>
>
> Testing on larger clusters has not had the desired throughput increases.




