[jira] [Updated] (HBASE-12223) MultiTableInputFormatBase.getSplits is too slow

2014-12-16 Thread YuanBo Peng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

YuanBo Peng updated HBASE-12223:

Attachment: HBASE-12223-v1.patch

 MultiTableInputFormatBase.getSplits is too slow
 ---

 Key: HBASE-12223
 URL: https://issues.apache.org/jira/browse/HBASE-12223
 Project: HBase
  Issue Type: Improvement
  Components: Client
Affects Versions: 0.94.15
Reporter: shanwen
Assignee: YuanBo Peng
Priority: Minor
 Fix For: 1.0.0, 2.0.0, 0.98.10, 0.94.27

 Attachments: HBASE-12223-v1.patch, HBASE-12223.patch


 When using multiple scans, getSplits is too slow: 800 scans take five minutes.
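For illustration, a likely contributor to the slowness is that getSplits resolves
the region locations for each Scan serially. Below is a minimal sketch of one
possible mitigation, fanning the per-scan work out to a small thread pool; it is
not necessarily what the attached patch does, and the class and interface names
here are illustrative only.

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

/** Hypothetical helper: compute splits for many scans in parallel instead of serially. */
public class ParallelSplitCalculator<S, T> {

  /** Work that turns one scan into its splits (e.g. one region lookup per scan). */
  public interface SplitFunction<S, T> {
    List<T> splitsFor(S scan) throws Exception;
  }

  private final int threads;

  public ParallelSplitCalculator(int threads) {
    this.threads = threads;
  }

  public List<T> getSplits(List<S> scans, final SplitFunction<S, T> fn) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(threads);
    try {
      List<Future<List<T>>> futures = new ArrayList<Future<List<T>>>();
      for (final S scan : scans) {
        futures.add(pool.submit(new Callable<List<T>>() {
          @Override
          public List<T> call() throws Exception {
            return fn.splitsFor(scan); // lookups for different scans now run concurrently
          }
        }));
      }
      List<T> splits = new ArrayList<T>();
      for (Future<List<T>> f : futures) {
        splits.addAll(f.get()); // preserves the per-scan order of the results
      }
      return splits;
    } finally {
      pool.shutdown();
    }
  }
}
{code}

With 800 scans and a modest pool size, the wall-clock time of getSplits could drop
roughly in proportion to the degree of parallelism, at the cost of a short-lived
thread pool.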



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12688) Update site with a bootstrap-based UI

2014-12-16 Thread yohan koehler (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14247960#comment-14247960
 ] 

yohan koehler commented on HBASE-12688:
---

Hi, I've been looking at reflow too, and it offers lots of useful features: theme 
switching and built-in components like carousels and sidebars. I think it's the 
better choice.

 Update site with a bootstrap-based UI
 -

 Key: HBASE-12688
 URL: https://issues.apache.org/jira/browse/HBASE-12688
 Project: HBase
  Issue Type: Bug
  Components: site
Affects Versions: 2.0.0
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
 Fix For: 2.0.0

 Attachments: HBASE-12688.00-reflow.patch, HBASE-12688.00.patch


 Looks like we can upgrade our look pretty cheaply just by swapping to a 
 different skin. This fluido-skin uses Twitter Bootstrap. It's an older 2.x 
 version (upstream has moved on to 3.x), but it's a start. There are some 
 out-of-the-box configuration choices regarding menu bar location. We can also 
 look into some of our own custom CSS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12699) undefined method `setAsyncLogFlush' exception throw when setting DEFERRED_LOG_FLUSH=true

2014-12-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14247988#comment-14247988
 ] 

Hadoop QA commented on HBASE-12699:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12687442/HBASE-12699.v1.master.patch
  against master branch at commit 96c6b9815ddbc9f2589655df4ad2381af04ac9f8.
  ATTACHMENT ID: 12687442

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+# DEFERRED_LOG_FLUSH is deprecated and was replaced by DURABILITY. 
 To keep backward compatible, it still exists.
+# However, it has to set before DURABILITY so that DURABILITY could 
overwrite if both args are set
+# DEFERRED_LOG_FLUSH is deprecated and was replaced by DURABILITY.  To 
keep backward compatible, it still exists.
+# However, it has to set before DURABILITY so that DURABILITY could 
overwrite if both args are set

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

 {color:red}-1 core zombie tests{color}.  There are 2 zombie test(s):   
at 
org.apache.mahout.clustering.fuzzykmeans.TestFuzzyKmeansClustering.testFuzzyKMeansMRJob(TestFuzzyKmeansClustering.java:195)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1558)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:736)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:786)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:745)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:647)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:681)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:692)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:787)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$2.evaluate(ThreadLeakControl.java:385)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSuite(RandomizedRunner.java:555)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$200(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$1.run(RandomizedRunner.java:491)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSuite(RandomizedRunner.java:501)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.run(RandomizedRunner.java:399)
at 
org.apache.mahout.clustering.streaming.cluster.BallKMeansTest.testClusteringMultipleRuns(BallKMeansTest.java:79)

Test results: 

[jira] [Commented] (HBASE-12673) Add a UT to read mob file when the mob hfile moving from the mob dir to the archive dir

2014-12-16 Thread Jiajia Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14247995#comment-14247995
 ] 

Jiajia Li commented on HBASE-12673:
---

hi, [~j...@cloudera.com], as you know, when HBase reads a mob cell it takes two 
steps.
# Read the ref cell from HBase; the cell value is the mob file name.
# HBase has two possible locations to read the mob cell from: 
mobWorkingDir/fileName and archiveDir/fileName. When the mob file is not in the 
mobWorkingDir, HBase tries the second location, but currently we only retry after 
a 
FileNotFoundException(https://github.com/apache/hbase/blob/hbase-11339/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HMobStore.java#L312).
Do you mean that a table deletion on the original table happening in the middle of 
the read operation could throw other IOExceptions? 
I also don't see how HFileLink would guarantee this case for MOB. Can you please 
give a more detailed description? Thanks
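For reference, the two-location read described above boils down to: try the mob 
file under the working directory first, and fall back to the archive directory 
only when the first attempt fails. A minimal sketch of that pattern is below; the 
class and helper names are illustrative, not the actual HMobStore code.

{code:java}
import java.io.FileNotFoundException;
import java.io.IOException;

import org.apache.hadoop.fs.Path;

/** Hypothetical sketch of the two-location mob read discussed above. */
public abstract class MobReadSketch<C> {

  /** Opens and reads the named mob file under the given directory. */
  protected abstract C readCell(Path dir, String fileName) throws IOException;

  public C readMobCell(Path mobWorkingDir, Path archiveDir, String fileName) throws IOException {
    try {
      // First location: <mobWorkingDir>/<fileName>
      return readCell(mobWorkingDir, fileName);
    } catch (FileNotFoundException fnfe) {
      // The file may have been moved to the archive (e.g. after the original
      // table was deleted); retry the second location before giving up.
      return readCell(archiveDir, fileName);
    }
  }
}
{code}

The open question in this thread is whether catching only FileNotFoundException is 
enough, or whether other IOExceptions (for example on close of an already-moved 
file) should also trigger the retry against the archive location.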

 Add a UT to read mob file when the mob hfile moving from the mob dir to the 
 archive dir
 ---

 Key: HBASE-12673
 URL: https://issues.apache.org/jira/browse/HBASE-12673
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Affects Versions: hbase-11339
Reporter: Jiajia Li
Assignee: Jiajia Li
 Fix For: hbase-11339

 Attachments: HBASE-12673.patch


 Add a unit test to scan the cloned table when deleting the original table; 
 the steps are as follows:
 1) create a table with mobs,
 2) snapshot it,
 3) clone it as a different table,
 4) have a read workload on the snapshot,
 5) delete the original table



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12636) Avoid too many write operations on zookeeper in replication

2014-12-16 Thread Liu Shaohui (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14248100#comment-14248100
 ] 

Liu Shaohui commented on HBASE-12636:
-

[~lhofhansl] [~stack]
Repeated replication data can already happen in many cases in the current codebase:
- A network problem where the master cluster does not get the response for a 
replication request even though the data has been written to the slave cluster.
- A region move in the slave cluster that makes the replication fail after part of 
the data has already been written to the slave cluster.

This patch just makes repeated replication data more frequent.

We may open another issue to check whether the replication operation is idempotent.

 Avoid too many write operations on zookeeper in replication
 ---

 Key: HBASE-12636
 URL: https://issues.apache.org/jira/browse/HBASE-12636
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.94.11
Reporter: Liu Shaohui
Assignee: Liu Shaohui
  Labels: replication
 Fix For: 1.0.0

 Attachments: HBASE-12635-v2.diff, HBASE-12636-v1.diff


 In our production cluster, we found there are over 1k write operations per 
 second on zookeeper coming from hbase replication. The reason is that the 
 replication source writes the log position to zookeeper for every edit shipping. 
 If the WAL currently being replicated is the one the regionserver is writing to, 
 each shipment is very small but very frequent, which causes many write 
 operations on zookeeper.
 A simple solution is to write the log position to zookeeper only when the 
 position diff or the number of skipped edits is larger than a threshold, rather 
 than on every edit shipping.
 Suggestions are welcomed, thx~
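To make the proposed threshold concrete, here is a minimal sketch of the idea; the 
class, field and threshold names are placeholders, not the actual replication 
source code, and the ZooKeeper write itself is left to the caller.

{code:java}
/** Hypothetical throttle that decides when the WAL position is written to ZooKeeper. */
public class PositionPersistThrottle {

  private final long positionDiffThreshold;  // e.g. bytes advanced in the WAL since last write
  private final long skippedEditsThreshold;  // e.g. edits shipped since last write

  private long lastPersistedPosition = -1;
  private long editsSinceLastPersist = 0;

  public PositionPersistThrottle(long positionDiffThreshold, long skippedEditsThreshold) {
    this.positionDiffThreshold = positionDiffThreshold;
    this.skippedEditsThreshold = skippedEditsThreshold;
  }

  /**
   * Called after every shipped batch. Returns true when the caller should write the
   * position to ZooKeeper; otherwise the update is skipped for now.
   */
  public boolean shouldPersist(long currentPosition, long editsShipped) {
    editsSinceLastPersist += editsShipped;
    boolean persist = lastPersistedPosition < 0
        || currentPosition - lastPersistedPosition >= positionDiffThreshold
        || editsSinceLastPersist >= skippedEditsThreshold;
    if (persist) {
      lastPersistedPosition = currentPosition;
      editsSinceLastPersist = 0;
    }
    return persist;
  }
}
{code}

The trade-off discussed above still applies: the coarser the persisted position, 
the more edits may be re-shipped after a failover, which is only safe if 
replication is effectively idempotent.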



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12223) MultiTableInputFormatBase.getSplits is too slow

2014-12-16 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-12223:
---
Status: Patch Available  (was: Open)

 MultiTableInputFormatBase.getSplits is too slow
 ---

 Key: HBASE-12223
 URL: https://issues.apache.org/jira/browse/HBASE-12223
 Project: HBase
  Issue Type: Improvement
  Components: Client
Affects Versions: 0.94.15
Reporter: shanwen
Assignee: YuanBo Peng
Priority: Minor
 Fix For: 1.0.0, 2.0.0, 0.98.10, 0.94.27

 Attachments: HBASE-12223-v1.patch, HBASE-12223.patch


 When using multiple scans, getSplits is too slow: 800 scans take five minutes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12223) MultiTableInputFormatBase.getSplits is too slow

2014-12-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14248155#comment-14248155
 ] 

Hadoop QA commented on HBASE-12223:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12687450/HBASE-12223-v1.patch
  against master branch at commit 96c6b9815ddbc9f2589655df4ad2381af04ac9f8.
  ATTACHMENT ID: 12687450

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12095//console

This message is automatically generated.

 MultiTableInputFormatBase.getSplits is too slow
 ---

 Key: HBASE-12223
 URL: https://issues.apache.org/jira/browse/HBASE-12223
 Project: HBase
  Issue Type: Improvement
  Components: Client
Affects Versions: 0.94.15
Reporter: shanwen
Assignee: YuanBo Peng
Priority: Minor
 Fix For: 1.0.0, 2.0.0, 0.98.10, 0.94.27

 Attachments: HBASE-12223-v1.patch, HBASE-12223.patch


 When using multiple scans, getSplits is too slow: 800 scans take five minutes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12641) Grant all permissions of hbase zookeeper node to hbase superuser in a secure cluster

2014-12-16 Thread Liu Shaohui (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14248192#comment-14248192
 ] 

Liu Shaohui commented on HBASE-12641:
-

[~apurtell]
{quote}
Why the 'if (!node.startsWith(zkw.baseZNode))' shortcut?
{quote}
See HBASE-7258: HBase creates the baseZNode recursively if the parent node 
does not exist.
If zookeeper.znode.parent is /service/hbase/, we don't want to set the ACL on the 
/service node when HBase creates it, so we add this shortcut.
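In other words, the ACL is only applied to znodes at or below 
zookeeper.znode.parent, and a shared ancestor such as /service is never touched. 
A minimal sketch of that check is below; the helper class is illustrative, the 
ZooKeeper calls are simplified, and a real implementation would append to the 
existing ACL list rather than replace it.

{code:java}
import java.util.Arrays;
import java.util.List;

import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.ACL;
import org.apache.zookeeper.data.Id;

/** Hypothetical sketch of the baseZNode shortcut discussed above. */
public class SuperuserAclSketch {

  /** Grants ALL to the configured superuser, but only on nodes under baseZNode. */
  public static void maybeSetSuperuserAcl(ZooKeeper zk, String baseZNode,
      String node, String superuser) throws KeeperException, InterruptedException {
    if (!node.startsWith(baseZNode)) {
      // Shortcut: never touch ACLs on ancestors (e.g. /service) that HBase only
      // created in passing while building zookeeper.znode.parent recursively.
      return;
    }
    List<ACL> acls = Arrays.asList(
        new ACL(ZooDefs.Perms.ALL, new Id("sasl", superuser)));
    zk.setACL(node, acls, -1); // -1 skips the ACL version check
  }
}
{code}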


 Grant all permissions of hbase zookeeper node to hbase superuser in a secure 
 cluster
 

 Key: HBASE-12641
 URL: https://issues.apache.org/jira/browse/HBASE-12641
 Project: HBase
  Issue Type: Improvement
  Components: Zookeeper
Reporter: Liu Shaohui
Assignee: Liu Shaohui
Priority: Minor
 Fix For: 1.0.0

 Attachments: HBASE-12641-v1.diff


 Currently in a secure cluster, only the master/regionserver kerberos user can 
 manage the znodes of hbase. But the master/regionserver kerberos user is for 
 rpc connections, and we usually use another super user to manage the cluster.
 In some special scenarios, we need to manage the znode data with that super 
 user, e.g.:
 a, to get the data of a znode for debugging;
 b, HBASE-8253: we need to delete the znode for a corrupted hlog so it does not 
 block replication.
 So we grant all permissions on the hbase zookeeper nodes to the hbase superuser 
 when creating these znodes.
 Suggestions are welcomed.
 [~apurtell]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12590) A solution for data skew in HBase-Mapreduce Job

2014-12-16 Thread Weichen Ye (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weichen Ye updated HBASE-12590:
---
Attachment: HBASE-12590-v3.patch

[~j...@cloudera.com]
Hi, would you please take a look at this new patch?

In the new patch:
1, redesigned the function for getting the split point in a large region
2, added a new mode for binary keys. The default mode is for text keys; the user 
can switch by setting a new configuration: hbase.table.row.textkey
3, added new tests for both text keys and binary keys

 A solution for data skew in HBase-Mapreduce Job
 ---

 Key: HBASE-12590
 URL: https://issues.apache.org/jira/browse/HBASE-12590
 Project: HBase
  Issue Type: Improvement
  Components: mapreduce
Reporter: Weichen Ye
 Attachments: A Solution for Data Skew in HBase-MapReduce Job 
 (Version2).pdf, HBASE-12590-v3.patch, HBase-12590-v1.patch, 
 HBase-12590-v2.patch


 1, Motivation
 In a production environment, data skew is a very common case. An HBase table 
 often contains a lot of small regions and several large regions. Small regions 
 waste a lot of computing resources: if we use a job to scan a table with 3000 
 small regions, we need a job with 3000 mappers. Large regions always block the 
 job: if, in a 100-region table, one region is far larger than the other 99 
 regions, then when we run a job with the table as input, 99 mappers complete 
 very quickly and we have to wait a long time for the last mapper.
 2, Configuration
 Add two new configurations.
 hbase.mapreduce.split.autobalance = true enables “auto balance” in 
 HBase-MapReduce jobs. The default value is false.
 hbase.mapreduce.split.targetsize = 1073741824 (default 1GB). The target size 
 of mapreduce splits.
 If a region is larger than the target size, cut the region into two splits. If 
 the total size of several small contiguous regions is less than the target size, 
 combine these regions into one split.
 Example:
 In attachment
 Welcome to the Review Board.
 https://reviews.apache.org/r/28494/diff/#
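To make the balancing rule above concrete, here is a minimal sketch, assuming each 
region's start key, end key and size are already known and that a mid-key helper 
exists for text keys. The types and method names are illustrative, not the patch's 
actual changes to the input format.

{code:java}
import java.util.ArrayList;
import java.util.List;

/** Hypothetical sketch of the auto-balance rule for MapReduce splits. */
public class SplitAutoBalanceSketch {

  /** Simplified stand-in for a region-backed input split. */
  public static class RegionSplit {
    final String startKey;
    final String endKey;
    final long sizeBytes;

    public RegionSplit(String startKey, String endKey, long sizeBytes) {
      this.startKey = startKey;
      this.endKey = endKey;
      this.sizeBytes = sizeBytes;
    }
  }

  /** Computes a key roughly halfway between start and end (text-key mode). */
  public interface MidKeyFunction {
    String midKey(String startKey, String endKey);
  }

  public static List<RegionSplit> balance(List<RegionSplit> regions, long targetSize,
      MidKeyFunction mid) {
    List<RegionSplit> out = new ArrayList<RegionSplit>();
    String pendingStart = null;
    String pendingEnd = null;
    long pendingSize = 0;

    for (RegionSplit r : regions) {                        // regions sorted by start key
      if (r.sizeBytes > targetSize) {
        flush(out, pendingStart, pendingEnd, pendingSize); // close any combined run
        pendingStart = null;
        pendingSize = 0;
        String m = mid.midKey(r.startKey, r.endKey);
        out.add(new RegionSplit(r.startKey, m, r.sizeBytes / 2)); // large region becomes two splits
        out.add(new RegionSplit(m, r.endKey, r.sizeBytes / 2));
      } else if (pendingSize + r.sizeBytes <= targetSize) {
        if (pendingStart == null) {
          pendingStart = r.startKey;                       // start a new combined run
        }
        pendingEnd = r.endKey;
        pendingSize += r.sizeBytes;                        // keep combining small regions
      } else {
        flush(out, pendingStart, pendingEnd, pendingSize); // run would exceed the target
        pendingStart = r.startKey;
        pendingEnd = r.endKey;
        pendingSize = r.sizeBytes;
      }
    }
    flush(out, pendingStart, pendingEnd, pendingSize);
    return out;
  }

  private static void flush(List<RegionSplit> out, String start, String end, long size) {
    if (start != null) {
      out.add(new RegionSplit(start, end, size));
    }
  }
}
{code}

Under this rule the 3000 small regions in the example collapse into far fewer 
splits, and the one oversized region in the 100-region table is cut so its work 
is shared by two mappers.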



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12590) A solution for data skew in HBase-Mapreduce Job

2014-12-16 Thread Weichen Ye (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weichen Ye updated HBASE-12590:
---
Attachment: A Solution for Data Skew in HBase-MapReduce Job (Version3).pdf

 A solution for data skew in HBase-Mapreduce Job
 ---

 Key: HBASE-12590
 URL: https://issues.apache.org/jira/browse/HBASE-12590
 Project: HBase
  Issue Type: Improvement
  Components: mapreduce
Reporter: Weichen Ye
 Attachments: A Solution for Data Skew in HBase-MapReduce Job 
 (Version2).pdf, A Solution for Data Skew in HBase-MapReduce Job 
 (Version3).pdf, HBASE-12590-v3.patch, HBase-12590-v1.patch, 
 HBase-12590-v2.patch


 1, Motivation
 In a production environment, data skew is a very common case. An HBase table 
 often contains a lot of small regions and several large regions. Small regions 
 waste a lot of computing resources: if we use a job to scan a table with 3000 
 small regions, we need a job with 3000 mappers. Large regions always block the 
 job: if, in a 100-region table, one region is far larger than the other 99 
 regions, then when we run a job with the table as input, 99 mappers complete 
 very quickly and we have to wait a long time for the last mapper.
 2, Configuration
 Add two new configurations.
 hbase.mapreduce.split.autobalance = true enables “auto balance” in 
 HBase-MapReduce jobs. The default value is false.
 hbase.mapreduce.split.targetsize = 1073741824 (default 1GB). The target size 
 of mapreduce splits.
 If a region is larger than the target size, cut the region into two splits. If 
 the total size of several small contiguous regions is less than the target size, 
 combine these regions into one split.
 Example:
 In attachment
 Welcome to the Review Board.
 https://reviews.apache.org/r/28494/diff/#



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12590) A solution for data skew in HBase-Mapreduce Job

2014-12-16 Thread Weichen Ye (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14248199#comment-14248199
 ] 

Weichen Ye commented on HBASE-12590:


Latest diff on review board: https://reviews.apache.org/r/28494/diff/


 A solution for data skew in HBase-Mapreduce Job
 ---

 Key: HBASE-12590
 URL: https://issues.apache.org/jira/browse/HBASE-12590
 Project: HBase
  Issue Type: Improvement
  Components: mapreduce
Reporter: Weichen Ye
 Attachments: A Solution for Data Skew in HBase-MapReduce Job 
 (Version2).pdf, A Solution for Data Skew in HBase-MapReduce Job 
 (Version3).pdf, HBASE-12590-v3.patch, HBase-12590-v1.patch, 
 HBase-12590-v2.patch


 1, Motivation
 In a production environment, data skew is a very common case. An HBase table 
 often contains a lot of small regions and several large regions. Small regions 
 waste a lot of computing resources: if we use a job to scan a table with 3000 
 small regions, we need a job with 3000 mappers. Large regions always block the 
 job: if, in a 100-region table, one region is far larger than the other 99 
 regions, then when we run a job with the table as input, 99 mappers complete 
 very quickly and we have to wait a long time for the last mapper.
 2, Configuration
 Add two new configurations.
 hbase.mapreduce.split.autobalance = true enables “auto balance” in 
 HBase-MapReduce jobs. The default value is false.
 hbase.mapreduce.split.targetsize = 1073741824 (default 1GB). The target size 
 of mapreduce splits.
 If a region is larger than the target size, cut the region into two splits. If 
 the total size of several small contiguous regions is less than the target size, 
 combine these regions into one split.
 Example:
 In attachment
 Welcome to the Review Board.
 https://reviews.apache.org/r/28494/diff/#



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12590) A solution for data skew in HBase-Mapreduce Job

2014-12-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14248276#comment-14248276
 ] 

Hadoop QA commented on HBASE-12590:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12687480/HBASE-12590-v3.patch
  against master branch at commit 96c6b9815ddbc9f2589655df4ad2381af04ac9f8.
  ATTACHMENT ID: 12687480

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:red}-1 checkstyle{color}.  The applied patch generated 
2091 checkstyle errors (more than the master's current 2089 errors).

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12096//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12096//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12096//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12096//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12096//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12096//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12096//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12096//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12096//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12096//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12096//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12096//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12096//artifact/patchprocess/checkstyle-aggregate.html

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12096//console

This message is automatically generated.

 A solution for data skew in HBase-Mapreduce Job
 ---

 Key: HBASE-12590
 URL: https://issues.apache.org/jira/browse/HBASE-12590
 Project: HBase
  Issue Type: Improvement
  Components: mapreduce
Reporter: Weichen Ye
 Attachments: A Solution for Data Skew in HBase-MapReduce Job 
 (Version2).pdf, A Solution for Data Skew in HBase-MapReduce Job 
 (Version3).pdf, HBASE-12590-v3.patch, HBase-12590-v1.patch, 
 HBase-12590-v2.patch


 1, Motivation
 In a production environment, data skew is a very common case. An HBase table 
 often contains a lot of small regions and several large regions. Small regions 
 waste a lot of computing resources: if we use a job to scan a table with 3000 
 small regions, we need a job with 3000 mappers. Large regions always block the 
 job: if, in a 100-region table, one region is far larger than the other 99 
 regions, then when we run a job with the table as input, 99 mappers complete 
 very quickly and we have to wait a long time for the last mapper.
 2, Configuration
 Add two new configurations.
 hbase.mapreduce.split.autobalance = true enables “auto balance” in 
 HBase-MapReduce jobs. The default value is false.
 hbase.mapreduce.split.targetsize = 1073741824 (default 1GB). The target 

[jira] [Commented] (HBASE-12673) Add a UT to read mob file when the mob hfile moving from the mob dir to the archive dir

2014-12-16 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14248323#comment-14248323
 ] 

Jonathan Hsieh commented on HBASE-12673:


Thanks for the pointer to HMobStore @ 312.  The retry there should address the 
situation I was concerned about.  Looking at that code again, one other concern 
comes up -- do you know if line 319 in there will throw another exception if we 
got the FNFE on line 313?  (We'd have an open file instance that got moved --- 
not sure what would happen on close at line 319.)  Can we change it so that we 
capture the other exceptions that could be caught there [1] and then try the 
next location?

HFileLink essentially makes this file redirection mechanism transparent and 
would potentially make the code easier to follow.  If you can convince me 
that the line 319 concern isn't a problem with a unit test, I'll drop the 
HFileLink request.  As it stands it might be a little difficult to mock out, 
but it would be helpful.

[1] 
https://github.com/apache/hbase/blob/hbase-11339/hbase-server/src/main/java/org/apache/hadoop/hbase/io/FileLink.java#L124

 Add a UT to read mob file when the mob hfile moving from the mob dir to the 
 archive dir
 ---

 Key: HBASE-12673
 URL: https://issues.apache.org/jira/browse/HBASE-12673
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Affects Versions: hbase-11339
Reporter: Jiajia Li
Assignee: Jiajia Li
 Fix For: hbase-11339

 Attachments: HBASE-12673.patch


 Add a unit test to scan the cloned table when deleting the original table; 
 the steps are as follows:
 1) create a table with mobs,
 2) snapshot it,
 3) clone it as a different table,
 4) have a read workload on the snapshot,
 5) delete the original table



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HBASE-12673) Add a UT to read mob file when the mob hfile moving from the mob dir to the archive dir

2014-12-16 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14248323#comment-14248323
 ] 

Jonathan Hsieh edited comment on HBASE-12673 at 12/16/14 2:43 PM:
--

Thanks for the pointer to HMobStore @ 312.  The retry there should address the 
situation I was concerned about.  Looking at that code again, one other concern 
comes up -- do you know if line 319 in there will throw another exception if we 
got the FNFE on line 313?  (We'd have an open file instance that got moved --- 
not sure what would happen on close at line 319.)  Can we change it so that we 
capture the other exceptions that could be caught there [1] and then try the 
next location?

HFileLink essentially makes this file redirection mechanism transparent and 
would potentially make the code easier to follow.  I'd prefer it if we could 
use that code, so if we find other cases we can just fix them in one centralized 
place. 

[1] 
https://github.com/apache/hbase/blob/hbase-11339/hbase-server/src/main/java/org/apache/hadoop/hbase/io/FileLink.java#L124


was (Author: jmhsieh):
Thanks for the pointer to HMobStore @ 312.  The retry there should address the 
situation I was concerned about.  Looking at that code again, one other concern 
comes up -- do you know if line 319 in there will throw another exception if we 
got the FNFE on line 313?  (We'd have an open file instance that got moved --- 
not sure what would happen on close at line 319.)  Can we change it so that we 
capture the other exceptions that could be caught there [1] and then try the 
next location?

HFileLink essentially makes this file redirection mechanism transparent and 
would potentially make the code easier to follow.  If you can convince me 
that the line 319 concern isn't a problem with a unit test, I'll drop the 
HFileLink request.  As it stands it might be a little difficult to mock out, 
but it would be helpful.

[1] 
https://github.com/apache/hbase/blob/hbase-11339/hbase-server/src/main/java/org/apache/hadoop/hbase/io/FileLink.java#L124

 Add a UT to read mob file when the mob hfile moving from the mob dir to the 
 archive dir
 ---

 Key: HBASE-12673
 URL: https://issues.apache.org/jira/browse/HBASE-12673
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Affects Versions: hbase-11339
Reporter: Jiajia Li
Assignee: Jiajia Li
 Fix For: hbase-11339

 Attachments: HBASE-12673.patch


 Add a unit test to scan the cloned table when deleting the original table; 
 the steps are as follows:
 1) create a table with mobs,
 2) snapshot it,
 3) clone it as a different table,
 4) have a read workload on the snapshot,
 5) delete the original table



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12629) Remove hbase.regionsizecalculator.enable from RegionSizeCalculator

2014-12-16 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14248376#comment-14248376
 ] 

Jonathan Hsieh commented on HBASE-12629:


[~enis] ping.

 Remove hbase.regionsizecalculator.enable from RegionSizeCalculator
 --

 Key: HBASE-12629
 URL: https://issues.apache.org/jira/browse/HBASE-12629
 Project: HBase
  Issue Type: Improvement
Reporter: Jonathan Hsieh
Assignee: Jonathan Hsieh
 Fix For: 1.0.0, 2.0.0

 Attachments: 
 0001-HBASE-12629-Remove-hbase.regionsizecalculator.enable.patch


 The RegionSizeCalculator has an option to disable it.  It is on by default, and 
 end-to-end use with it disabled is not tested or used anywhere except for a 
 simple unit test.  This removes the option.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12699) undefined method `setAsyncLogFlush' exception throw when setting DEFERRED_LOG_FLUSH=true

2014-12-16 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14248391#comment-14248391
 ] 

stack commented on HBASE-12699:
---

+1 Failure not related.

 undefined method `setAsyncLogFlush' exception throw when setting 
 DEFERRED_LOG_FLUSH=true
 -

 Key: HBASE-12699
 URL: https://issues.apache.org/jira/browse/HBASE-12699
 Project: HBase
  Issue Type: Bug
  Components: shell
Affects Versions: 2.0.0, 0.99.2
Reporter: Stephen Yuan Jiang
Assignee: Stephen Yuan Jiang
 Attachments: HBASE-12699.v1.master.patch

   Original Estimate: 24h
  Remaining Estimate: 24h

 In the hbase shell, when trying to set DEFERRED_LOG_FLUSH during create or 
 alter, an undefined method `setAsyncLogFlush' exception is thrown.
 This is because DEFERRED_LOG_FLUSH was deprecated and the setAsyncLogFlush 
 method was removed; it was replaced by DURABILITY.
 DEFERRED_LOG_FLUSH=true is the same as DURABILITY='ASYNC_WAL'.
 The default is DURABILITY='SYNC_WAL', which is the same as the default 
 DEFERRED_LOG_FLUSH=false.
 We should ask users to use the DURABILITY setting.  In the meantime, for 
 backward compatibility, the hbase shell should still allow setting 
 DEFERRED_LOG_FLUSH.  Internally, instead of calling setAsyncLogFlush, it 
 should call setDurability.
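For backward compatibility the translation boils down to applying the deprecated 
boolean first so that an explicit DURABILITY argument can overwrite it. A minimal 
Java sketch of that mapping is below; HTableDescriptor and Durability are the real 
client API, but the helper class itself is illustrative (the actual fix lives in 
the shell's hbase-shell/src/main/ruby/hbase/admin.rb).

{code:java}
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.Durability;

/** Hypothetical sketch of the DEFERRED_LOG_FLUSH to DURABILITY translation. */
public class DeferredLogFlushCompat {

  /**
   * Applies the deprecated DEFERRED_LOG_FLUSH argument first, so an explicit
   * DURABILITY argument (if also supplied) overwrites it afterwards.
   */
  public static void apply(HTableDescriptor htd, Boolean deferredLogFlush, String durability) {
    if (deferredLogFlush != null) {
      htd.setDurability(deferredLogFlush ? Durability.ASYNC_WAL : Durability.SYNC_WAL);
    }
    if (durability != null) {
      htd.setDurability(Durability.valueOf(durability)); // e.g. 'ASYNC_WAL' or 'SYNC_WAL'
    }
  }
}
{code}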



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11861) Native MOB Compaction mechanisms.

2014-12-16 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14248395#comment-14248395
 ] 

Jonathan Hsieh commented on HBASE-11861:


bq. This is why I insist on running the mob compaction in regions. If we do the 
mob compaction out of region or across regions, we have to lock the major 
compactions globally.

Nice catch on that race condition -- I buy it.  This is essentially the same as 
with the MR sweeper approach, right? 

So we'd need to guarantee that the compacted mob and the bulkload of the new 
references block a major compaction on the region that the ref bulk load is 
happening on.  This means no major compactions before step #2, but they are 
allowed after step #4.  

Let's spell out the costs of the different approaches -- the global del-mob scan 
for the mob compaction approach and the per-region mob compaction. 

Meanwhile I noticed you filed a new jira for counts and I filed one for the del 
mob generator.  We can get code started on those and hash out this higher 
level design while doing so.

bq. I think we could leave the expired (live longer than TTL) cells out of the 
del files. Let the ExpiredMobFileCleaner handle those mob files directly.

Sounds reasonable.  We need to enforce the mob file time ordering, though, to 
make sure the mob compaction is effective.



 Native MOB Compaction mechanisms.
 -

 Key: HBASE-11861
 URL: https://issues.apache.org/jira/browse/HBASE-11861
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Affects Versions: 2.0.0
Reporter: Jonathan Hsieh
 Attachments: 141030-mob-compaction.pdf, mob compaction.pdf


 Currently, the first cut of mob will have external processes to age off old 
 mob data (the ttl cleaner) and to compact away deleted or overwritten data 
 (the sweep tool).  
 From an operational point of view, having two external tools, especially one 
 that relies on MapReduce, is undesirable.  In this issue we'll tackle 
 integrating these into hbase without requiring external processes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12699) undefined method `setAsyncLogFlush' exception throw when setting DEFERRED_LOG_FLUSH=true

2014-12-16 Thread Stephen Yuan Jiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen Yuan Jiang updated HBASE-12699:
---
Attachment: HBASE-12699.v1.branch-1.patch

 undefined method `setAsyncLogFlush' exception throw when setting 
 DEFERRED_LOG_FLUSH=true
 -

 Key: HBASE-12699
 URL: https://issues.apache.org/jira/browse/HBASE-12699
 Project: HBase
  Issue Type: Bug
  Components: shell
Affects Versions: 2.0.0, 0.99.2
Reporter: Stephen Yuan Jiang
Assignee: Stephen Yuan Jiang
 Attachments: HBASE-12699.v1.branch-1.patch, 
 HBASE-12699.v1.master.patch

   Original Estimate: 24h
  Remaining Estimate: 24h

 In the hbase shell, when trying to set DEFERRED_LOG_FLUSH during create or 
 alter, an undefined method `setAsyncLogFlush' exception is thrown.
 This is because DEFERRED_LOG_FLUSH was deprecated and the setAsyncLogFlush 
 method was removed; it was replaced by DURABILITY.
 DEFERRED_LOG_FLUSH=true is the same as DURABILITY='ASYNC_WAL'.
 The default is DURABILITY='SYNC_WAL', which is the same as the default 
 DEFERRED_LOG_FLUSH=false.
 We should ask users to use the DURABILITY setting.  In the meantime, for 
 backward compatibility, the hbase shell should still allow setting 
 DEFERRED_LOG_FLUSH.  Internally, instead of calling setAsyncLogFlush, it 
 should call setDurability.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work logged] (HBASE-12699) undefined method `setAsyncLogFlush' exception throw when setting DEFERRED_LOG_FLUSH=true

2014-12-16 Thread Stephen Yuan Jiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12699?focusedWorklogId=18771&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-18771
 ]

Stephen Yuan Jiang logged work on HBASE-12699:
--

Author: Stephen Yuan Jiang
Created on: 16/Dec/14 17:28
Start Date: 16/Dec/14 17:27
Worklog Time Spent: 4h 

Issue Time Tracking
---

Worklog Id: (was: 18771)
Time Spent: 4h
Remaining Estimate: 1h  (was: 24h)

 undefined method `setAsyncLogFlush' exception throw when setting 
 DEFERRED_LOG_FLUSH=true
 -

 Key: HBASE-12699
 URL: https://issues.apache.org/jira/browse/HBASE-12699
 Project: HBase
  Issue Type: Bug
  Components: shell
Affects Versions: 2.0.0, 0.99.2
Reporter: Stephen Yuan Jiang
Assignee: Stephen Yuan Jiang
 Attachments: HBASE-12699.v1.branch-1.patch, 
 HBASE-12699.v1.master.patch

   Original Estimate: 24h
  Time Spent: 4h
  Remaining Estimate: 1h

 In the hbase shell, when trying to set DEFERRED_LOG_FLUSH during create or 
 alter, an undefined method `setAsyncLogFlush' exception is thrown.
 This is because DEFERRED_LOG_FLUSH was deprecated and the setAsyncLogFlush 
 method was removed; it was replaced by DURABILITY.
 DEFERRED_LOG_FLUSH=true is the same as DURABILITY='ASYNC_WAL'.
 The default is DURABILITY='SYNC_WAL', which is the same as the default 
 DEFERRED_LOG_FLUSH=false.
 We should ask users to use the DURABILITY setting.  In the meantime, for 
 backward compatibility, the hbase shell should still allow setting 
 DEFERRED_LOG_FLUSH.  Internally, instead of calling setAsyncLogFlush, it 
 should call setDurability.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12699) undefined method `setAsyncLogFlush' exception thrown when setting DEFERRED_LOG_FLUSH=true

2014-12-16 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-12699:
---
Summary: undefined method `setAsyncLogFlush' exception thrown when setting 
DEFERRED_LOG_FLUSH=true  (was: undefined method `setAsyncLogFlush' exception 
throw when setting DEFERRED_LOG_FLUSH=true)

 undefined method `setAsyncLogFlush' exception thrown when setting 
 DEFERRED_LOG_FLUSH=true
 --

 Key: HBASE-12699
 URL: https://issues.apache.org/jira/browse/HBASE-12699
 Project: HBase
  Issue Type: Bug
  Components: shell
Affects Versions: 2.0.0, 0.99.2
Reporter: Stephen Yuan Jiang
Assignee: Stephen Yuan Jiang
 Attachments: HBASE-12699.v1.branch-1.patch, 
 HBASE-12699.v1.master.patch

   Original Estimate: 24h
  Time Spent: 4h
  Remaining Estimate: 1h

 In the hbase shell, when trying to set DEFERRED_LOG_FLUSH during create or 
 alter, an undefined method `setAsyncLogFlush' exception is thrown.
 This is because DEFERRED_LOG_FLUSH was deprecated and the setAsyncLogFlush 
 method was removed; it was replaced by DURABILITY.
 DEFERRED_LOG_FLUSH=true is the same as DURABILITY='ASYNC_WAL'.
 The default is DURABILITY='SYNC_WAL', which is the same as the default 
 DEFERRED_LOG_FLUSH=false.
 We should ask users to use the DURABILITY setting.  In the meantime, for 
 backward compatibility, the hbase shell should still allow setting 
 DEFERRED_LOG_FLUSH.  Internally, instead of calling setAsyncLogFlush, it 
 should call setDurability.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12699) undefined method `setAsyncLogFlush' exception thrown when setting DEFERRED_LOG_FLUSH=true

2014-12-16 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-12699:
---
   Resolution: Fixed
Fix Version/s: 2.0.0
   1.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Thanks for the patch, Stephen.

Thanks for the review, Stack.

 undefined method `setAsyncLogFlush' exception thrown when setting 
 DEFERRED_LOG_FLUSH=true
 --

 Key: HBASE-12699
 URL: https://issues.apache.org/jira/browse/HBASE-12699
 Project: HBase
  Issue Type: Bug
  Components: shell
Affects Versions: 2.0.0, 0.99.2
Reporter: Stephen Yuan Jiang
Assignee: Stephen Yuan Jiang
 Fix For: 1.0.0, 2.0.0

 Attachments: HBASE-12699.v1.branch-1.patch, 
 HBASE-12699.v1.master.patch

   Original Estimate: 24h
  Time Spent: 4h
  Remaining Estimate: 1h

 In the hbase shell, when trying to set DEFERRED_LOG_FLUSH during create or 
 alter, an undefined method `setAsyncLogFlush' exception is thrown.
 This is because DEFERRED_LOG_FLUSH was deprecated and the setAsyncLogFlush 
 method was removed; it was replaced by DURABILITY.
 DEFERRED_LOG_FLUSH=true is the same as DURABILITY='ASYNC_WAL'.
 The default is DURABILITY='SYNC_WAL', which is the same as the default 
 DEFERRED_LOG_FLUSH=false.
 We should ask users to use the DURABILITY setting.  In the meantime, for 
 backward compatibility, the hbase shell should still allow setting 
 DEFERRED_LOG_FLUSH.  Internally, instead of calling setAsyncLogFlush, it 
 should call setDurability.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-5699) Run with > 1 WAL in HRegionServer

2014-12-16 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-5699:
---
Attachment: 
HBASE-5699_disabled_and_regular_#workers_vs_MiB_per_s_1x1col_512Bval_wal_count_1,2,4.tiff

Attaching a plot that includes running the same tests with the delegate wals as 
DisabledWALProvider, named 
HBASE-5699_disabled_and_regular_#workers_vs_MiB_per_s_1x1col_512Bval_wal_count_1,2,4.

This should show the limit from context switching and such in the test itself. 
The DisabledWALProvider doesn't include any of the overhead from the ringbuffer 
or sync grouping.

There are very few data points for the new test cases, so I didn't include any 
stddev bars; I just used the average for the whole run. All of them were so 
short that they probably didn't have time to get into a steady state.

(The disabled wals are at the top; the previous runs with datanode writes are 
at the bottom.)

 Run with > 1 WAL in HRegionServer
 -

 Key: HBASE-5699
 URL: https://issues.apache.org/jira/browse/HBASE-5699
 Project: HBase
  Issue Type: Improvement
  Components: Performance, wal
Reporter: binlijin
Assignee: Sean Busbey
Priority: Critical
 Attachments: HBASE-5699.3.patch.txt, HBASE-5699.4.patch.txt, 
 HBASE-5699_#workers_vs_MiB_per_s_1x1col_512Bval_wal_count_1,2,4.tiff, 
 HBASE-5699_disabled_and_regular_#workers_vs_MiB_per_s_1x1col_512Bval_wal_count_1,2,4.tiff,
  HBASE-5699_write_iops_multiwal-1_1_to_200_threads.tiff, 
 HBASE-5699_write_iops_multiwal-2_10,50,120,190,260,330,400_threads.tiff, 
 HBASE-5699_write_iops_multiwal-4_10,50,120,190,260,330,400_threads.tiff, 
 HBASE-5699_write_iops_multiwal-6_10,50,120,190,260,330,400_threads.tiff, 
 HBASE-5699_write_iops_upstream_1_to_200_threads.tiff, PerfHbase.txt, 
 hbase-5699_multiwal_400-threads_stats_sync_heavy.txt, 
 hbase-5699_total_throughput_sync_heavy.txt, 
 results-hbase5699-upstream.txt.bz2, results-hbase5699-wals-1.txt.bz2, 
 results-updated-hbase5699-wals-2.txt.bz2, 
 results-updated-hbase5699-wals-4.txt.bz2, 
 results-updated-hbase5699-wals-6.txt.bz2






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12645) HBaseTestingUtility is using ${$HOME} for rootDir

2014-12-16 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14248582#comment-14248582
 ] 

Varun Saxena commented on HBASE-12645:
--

[~stack], kindly review. The test case failure is unrelated and the tests are 
passing in my local setup.

bq. With this in place, after the test suite completes, we are not writing to 
user homedir any more?
I removed all references to {{getHomeDir()}} from the code and nothing remained 
in the home dir after the test suite completed.
Moreover, I checked all the test logs to see whether any directory was created 
inside the home directory and could not find anything. So in all probability 
nothing should have been created in the home dir.

bq. Also, the new flag is never doc'd. What is it supposed to do? (I'm not 
clear). Flag is createRootDirIfExists This means, create root dir if it exists? 
But we don't check existence when we use it. Should we?
The flag name has been changed. The flag basically means whether to fetch a new 
root dir path if one already exists (i.e. has been fetched earlier for the test 
class). First a path is fetched and then the directory is created. The directory 
is in any case newly created (overwritten if already there). 

bq. nit: Usually the following is written as if(!createIfExists) rather than as 
if (false == createIfExists) {
Made the necessary change.

bq. Would we not want this flag always set? Or some tests need it not set?
Some cases fail if we always set the flag.

 HBaseTestingUtility is using ${$HOME} for rootDir
 -

 Key: HBASE-12645
 URL: https://issues.apache.org/jira/browse/HBASE-12645
 Project: HBase
  Issue Type: Test
  Components: test
Affects Versions: 1.0.0
Reporter: Nick Dimiduk
Assignee: Varun Saxena
Priority: Critical
 Fix For: 1.0.0, 2.0.0

 Attachments: HBASE-12645.002.patch, HBASE-12645.003.patch, 
 HBASE-12645.004.patch, HBASE-12645.004.patch, HBASE-12645.005.patch, 
 HBASE-12645.006.patch, HBASE-12645.patch


 I noticed this while running tests on branch-1
 {noformat}
 Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.009 sec  
 FAILURE! - in 
 org.apache.hadoop.hbase.regionserver.wal.TestReadOldRootAndMetaEdits
 org.apache.hadoop.hbase.regionserver.wal.TestReadOldRootAndMetaEdits  Time 
 elapsed: 0.009 sec   ERROR!
 java.io.FileNotFoundException: Destination exists and is not a directory: 
 /homes/hortonnd/hbase
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:423)
 at 
 org.apache.hadoop.fs.ChecksumFileSystem.mkdirs(ChecksumFileSystem.java:588)
 at 
 org.apache.hadoop.hbase.HBaseTestingUtility.createRootDir(HBaseTestingUtility.java:1053)
 at 
 org.apache.hadoop.hbase.regionserver.wal.TestReadOldRootAndMetaEdits.setupBeforeClass(TestReadOldRootAndMetaEdits.java:70)
 {noformat}
 Either the testing utility has a regression or there's a config regression in 
 this test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12645) HBaseTestingUtility is using ${$HOME} for rootDir

2014-12-16 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-12645:
--
Attachment: HBASE-12645.006.patch

Retry

 HBaseTestingUtility is using ${$HOME} for rootDir
 -

 Key: HBASE-12645
 URL: https://issues.apache.org/jira/browse/HBASE-12645
 Project: HBase
  Issue Type: Test
  Components: test
Affects Versions: 1.0.0
Reporter: Nick Dimiduk
Assignee: Varun Saxena
Priority: Critical
 Fix For: 1.0.0, 2.0.0

 Attachments: HBASE-12645.002.patch, HBASE-12645.003.patch, 
 HBASE-12645.004.patch, HBASE-12645.004.patch, HBASE-12645.005.patch, 
 HBASE-12645.006.patch, HBASE-12645.006.patch, HBASE-12645.patch


 I noticed this while running tests on branch-1
 {noformat}
 Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.009 sec  
 FAILURE! - in 
 org.apache.hadoop.hbase.regionserver.wal.TestReadOldRootAndMetaEdits
 org.apache.hadoop.hbase.regionserver.wal.TestReadOldRootAndMetaEdits  Time 
 elapsed: 0.009 sec   ERROR!
 java.io.FileNotFoundException: Destination exists and is not a directory: 
 /homes/hortonnd/hbase
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:423)
 at 
 org.apache.hadoop.fs.ChecksumFileSystem.mkdirs(ChecksumFileSystem.java:588)
 at 
 org.apache.hadoop.hbase.HBaseTestingUtility.createRootDir(HBaseTestingUtility.java:1053)
 at 
 org.apache.hadoop.hbase.regionserver.wal.TestReadOldRootAndMetaEdits.setupBeforeClass(TestReadOldRootAndMetaEdits.java:70)
 {noformat}
 Either the testing utility has a regression or there's a config regression in 
 this test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12645) HBaseTestingUtility is using ${$HOME} for rootDir

2014-12-16 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14248637#comment-14248637
 ] 

stack commented on HBASE-12645:
---

Patch looks good.

I'm not sure I understand what this means:

Whether to get a new root or data dir path even if such a path has been 
fetched earlier is decided based on flag getNewRootDirPathIfExists

What is 'get a new root or data dir path' and what does 'fetched' mean here?  
Does it mean created (create with override)? (I think it means the latter -- 
one of your javadoc param notes says so... I'd think all mentions of this flag 
in javadoc would have the same text?)

Usually too the above comment is put on the javadoc param '@param 
getNewRootDirPathIfExists Whether to get a '  I see you do it sometimes but 
not always.

getNewRootDirPathIfExists is not a good name for a flag.  It is the name of a 
getter method for a flag named newRootDirPathIfExists.

Otherwise, the patch is great, and thank you for checking that we no longer write 
the root dir. I ran the patch against hadoopqa again so we can see the test 
failure is not related.  Thanks.

 HBaseTestingUtility is using ${$HOME} for rootDir
 -

 Key: HBASE-12645
 URL: https://issues.apache.org/jira/browse/HBASE-12645
 Project: HBase
  Issue Type: Test
  Components: test
Affects Versions: 1.0.0
Reporter: Nick Dimiduk
Assignee: Varun Saxena
Priority: Critical
 Fix For: 1.0.0, 2.0.0

 Attachments: HBASE-12645.002.patch, HBASE-12645.003.patch, 
 HBASE-12645.004.patch, HBASE-12645.004.patch, HBASE-12645.005.patch, 
 HBASE-12645.006.patch, HBASE-12645.006.patch, HBASE-12645.patch


 I noticed this while running tests on branch-1
 {noformat}
 Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.009 sec  
 FAILURE! - in 
 org.apache.hadoop.hbase.regionserver.wal.TestReadOldRootAndMetaEdits
 org.apache.hadoop.hbase.regionserver.wal.TestReadOldRootAndMetaEdits  Time 
 elapsed: 0.009 sec   ERROR!
 java.io.FileNotFoundException: Destination exists and is not a directory: 
 /homes/hortonnd/hbase
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:423)
 at 
 org.apache.hadoop.fs.ChecksumFileSystem.mkdirs(ChecksumFileSystem.java:588)
 at 
 org.apache.hadoop.hbase.HBaseTestingUtility.createRootDir(HBaseTestingUtility.java:1053)
 at 
 org.apache.hadoop.hbase.regionserver.wal.TestReadOldRootAndMetaEdits.setupBeforeClass(TestReadOldRootAndMetaEdits.java:70)
 {noformat}
 Either the testing utility has a regression or there's a config regression in 
 this test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-5699) Run with > 1 WAL in HRegionServer

2014-12-16 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14248660#comment-14248660
 ] 

stack commented on HBASE-5699:
--

Thanks [~busbey]. Looks like writing to datanodes puts a bit of friction on our 
write path.  I wonder how much the ringbuffer+grouping is costing us? Looking 
at the graph, you'd think adding extra WALs would make a bigger difference, 
given the large gap between no-friction and one WAL. Good stuff.

 Run with > 1 WAL in HRegionServer
 -

 Key: HBASE-5699
 URL: https://issues.apache.org/jira/browse/HBASE-5699
 Project: HBase
  Issue Type: Improvement
  Components: Performance, wal
Reporter: binlijin
Assignee: Sean Busbey
Priority: Critical
 Attachments: HBASE-5699.3.patch.txt, HBASE-5699.4.patch.txt, 
 HBASE-5699_#workers_vs_MiB_per_s_1x1col_512Bval_wal_count_1,2,4.tiff, 
 HBASE-5699_disabled_and_regular_#workers_vs_MiB_per_s_1x1col_512Bval_wal_count_1,2,4.tiff,
  HBASE-5699_write_iops_multiwal-1_1_to_200_threads.tiff, 
 HBASE-5699_write_iops_multiwal-2_10,50,120,190,260,330,400_threads.tiff, 
 HBASE-5699_write_iops_multiwal-4_10,50,120,190,260,330,400_threads.tiff, 
 HBASE-5699_write_iops_multiwal-6_10,50,120,190,260,330,400_threads.tiff, 
 HBASE-5699_write_iops_upstream_1_to_200_threads.tiff, PerfHbase.txt, 
 hbase-5699_multiwal_400-threads_stats_sync_heavy.txt, 
 hbase-5699_total_throughput_sync_heavy.txt, 
 results-hbase5699-upstream.txt.bz2, results-hbase5699-wals-1.txt.bz2, 
 results-updated-hbase5699-wals-2.txt.bz2, 
 results-updated-hbase5699-wals-4.txt.bz2, 
 results-updated-hbase5699-wals-6.txt.bz2






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12645) HBaseTestingUtility is using ${$HOME} for rootDir

2014-12-16 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14248687#comment-14248687
 ] 

Varun Saxena commented on HBASE-12645:
--

bq. Whether to get a new root or data dir path even if such a path has been 
fetched earlier is decided based on flag getNewRootDirPathIfExists
bq. What is 'get a new root or data dir path' and what does 'fetched' mean 
here? Does it mean created (create with override)? (I think it means this 
latter – one of your javadoc param notes says so... I'd think all mention of 
this flag in javadoc would have same text?)
Well, the creation of the root directory and the fetching of its path are 
2 distinct operations. {{createRootDir()}} fetches the path of the root 
directory by calling {{getDefaultRootDirPath()}} and then creates it 
(overwriting it if the path exists). 
Whenever we fetch or get a root or data directory path we mark it 
({{dataTestDirOnTestFS}} will be null if the path hasn't been retrieved earlier). 
The flag {{getNewRootDirPathIfExists}} indicates whether this flow of getting 
the path has been hit before, by checking whether {{dataTestDirOnTestFS}} is 
null or not.
The flag name was kept like this for lack of a better one; it didn't sound 
great even to me. Perhaps you can suggest a better name.
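Restated as code, the flag simply controls whether a cached path is reused. A 
minimal sketch of the idea is below; the class is illustrative and the names just 
mirror the discussion above, it is not the patch itself.

{code:java}
import java.util.UUID;

/** Hypothetical sketch of the "get a new root dir path" flag discussed above. */
public class TestDirPathCache {

  private String dataTestDirOnTestFS;   // null until a path has been fetched once

  /**
   * Returns the root/data dir path. When getNewRootDirPathIfExists is false, any
   * previously fetched path is reused; when true, a fresh path is generated even
   * if one already exists.
   */
  public synchronized String getRootDirPath(String base, boolean getNewRootDirPathIfExists) {
    if (dataTestDirOnTestFS == null || getNewRootDirPathIfExists) {
      dataTestDirOnTestFS = base + "/" + UUID.randomUUID();
    }
    return dataTestDirOnTestFS;
  }
}
{code}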


 HBaseTestingUtility is using ${$HOME} for rootDir
 -

 Key: HBASE-12645
 URL: https://issues.apache.org/jira/browse/HBASE-12645
 Project: HBase
  Issue Type: Test
  Components: test
Affects Versions: 1.0.0
Reporter: Nick Dimiduk
Assignee: Varun Saxena
Priority: Critical
 Fix For: 1.0.0, 2.0.0

 Attachments: HBASE-12645.002.patch, HBASE-12645.003.patch, 
 HBASE-12645.004.patch, HBASE-12645.004.patch, HBASE-12645.005.patch, 
 HBASE-12645.006.patch, HBASE-12645.006.patch, HBASE-12645.patch


 I noticed this while running tests on branch-1
 {noformat}
 Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.009 sec  
 FAILURE! - in 
 org.apache.hadoop.hbase.regionserver.wal.TestReadOldRootAndMetaEdits
 org.apache.hadoop.hbase.regionserver.wal.TestReadOldRootAndMetaEdits  Time 
 elapsed: 0.009 sec   ERROR!
 java.io.FileNotFoundException: Destination exists and is not a directory: 
 /homes/hortonnd/hbase
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:423)
 at 
 org.apache.hadoop.fs.ChecksumFileSystem.mkdirs(ChecksumFileSystem.java:588)
 at 
 org.apache.hadoop.hbase.HBaseTestingUtility.createRootDir(HBaseTestingUtility.java:1053)
 at 
 org.apache.hadoop.hbase.regionserver.wal.TestReadOldRootAndMetaEdits.setupBeforeClass(TestReadOldRootAndMetaEdits.java:70)
 {noformat}
 Either the testing utility has a regression or there's a config regression in 
 this test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12699) undefined method `setAsyncLogFlush' exception thrown when setting DEFERRED_LOG_FLUSH=true

2014-12-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14248686#comment-14248686
 ] 

Hudson commented on HBASE-12699:


SUCCESS: Integrated in HBase-1.0 #588 (See 
[https://builds.apache.org/job/HBase-1.0/588/])
HBASE-12699 undefined method 'setAsyncLogFlush' exception thrown when setting 
DEFERRED_LOG_FLUSH=true (Stephen Jiang) (tedyu: rev 
a9645e3e97ea06fa0acf277680124535226b4297)
* hbase-shell/src/main/ruby/hbase/admin.rb
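
For reference, a minimal Java-side sketch of the replacement described in this 
issue (DEFERRED_LOG_FLUSH=true becoming DURABILITY='ASYNC_WAL'); the table name 
below is only an example and this is not the committed admin.rb change itself:

{code}
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Durability;

public class AsyncWalExample {
  public static HTableDescriptor asyncWalTable(String name) {
    // DEFERRED_LOG_FLUSH => true is equivalent to DURABILITY => 'ASYNC_WAL';
    // the default, SYNC_WAL, corresponds to DEFERRED_LOG_FLUSH => false.
    HTableDescriptor desc = new HTableDescriptor(TableName.valueOf(name));
    desc.setDurability(Durability.ASYNC_WAL);
    return desc;
  }
}
{code}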


 undefined method `setAsyncLogFlush' exception thrown when setting 
 DEFERRED_LOG_FLUSH=true
 --

 Key: HBASE-12699
 URL: https://issues.apache.org/jira/browse/HBASE-12699
 Project: HBase
  Issue Type: Bug
  Components: shell
Affects Versions: 2.0.0, 0.99.2
Reporter: Stephen Yuan Jiang
Assignee: Stephen Yuan Jiang
 Fix For: 1.0.0, 2.0.0

 Attachments: HBASE-12699.v1.branch-1.patch, 
 HBASE-12699.v1.master.patch

   Original Estimate: 24h
  Time Spent: 4h
  Remaining Estimate: 1h

 In hbase shell, when trying to set DEFERRED_LOG_FLUSH during create or alter, 
 an undefined method `setAsyncLogFlush' exception was thrown.  
 This is due to that DEFERRED_LOG_FLUSH was deprecated and the 
 setAsyncLogFlush method was removed.  It was replaced by DURABILITY.
 DEFERRED_LOG_FLUSH=true is the same as DURABILITY='ASYNC_WAL'
 The default is DURABILITY='SYNC_WAL', which is the same as the default 
 DEFERRED_LOG_FLUSH=false
 We should ask user to use the DURABILITY setting.  In the meantime, for 
 backward compatibility, the hbase shell should still allow setting 
 DEFERRED_LOG_FLUSH.  Internally, instead of calling setAsyncLogFlush, it 
 should call setDurability



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12645) HBaseTestingUtility is using ${$HOME} for rootDir

2014-12-16 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14248692#comment-14248692
 ] 

Varun Saxena commented on HBASE-12645:
--

Correcting myself.
The flag {{getNewRootDirPathIfExists}} indicates whether to get a new root 
directory path even if this flow of getting the path has been hit before 
(whether the flow has been hit before is determined by checking whether 
{{dataTestDirOnTestFS}} is null or not).

 HBaseTestingUtility is using ${$HOME} for rootDir
 -

 Key: HBASE-12645
 URL: https://issues.apache.org/jira/browse/HBASE-12645
 Project: HBase
  Issue Type: Test
  Components: test
Affects Versions: 1.0.0
Reporter: Nick Dimiduk
Assignee: Varun Saxena
Priority: Critical
 Fix For: 1.0.0, 2.0.0

 Attachments: HBASE-12645.002.patch, HBASE-12645.003.patch, 
 HBASE-12645.004.patch, HBASE-12645.004.patch, HBASE-12645.005.patch, 
 HBASE-12645.006.patch, HBASE-12645.006.patch, HBASE-12645.patch


 I noticed this while running tests on branch-1
 {noformat}
 Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.009 sec <<< 
 FAILURE! - in 
 org.apache.hadoop.hbase.regionserver.wal.TestReadOldRootAndMetaEdits
 org.apache.hadoop.hbase.regionserver.wal.TestReadOldRootAndMetaEdits  Time 
 elapsed: 0.009 sec  <<< ERROR!
 java.io.FileNotFoundException: Destination exists and is not a directory: 
 /homes/hortonnd/hbase
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:423)
 at 
 org.apache.hadoop.fs.ChecksumFileSystem.mkdirs(ChecksumFileSystem.java:588)
 at 
 org.apache.hadoop.hbase.HBaseTestingUtility.createRootDir(HBaseTestingUtility.java:1053)
 at 
 org.apache.hadoop.hbase.regionserver.wal.TestReadOldRootAndMetaEdits.setupBeforeClass(TestReadOldRootAndMetaEdits.java:70)
 {noformat}
 Either the testing utility has a regression or there's a config regression in 
 this test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12644) Visibility Labels: issue with storing super users in labels table

2014-12-16 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14248697#comment-14248697
 ] 

Jerry He commented on HBASE-12644:
--

Hi, [~tedyu], [~anoop.hbase]

I cannot find an option in the menu to add a release note. Probably I do not 
have the permission.
Here is the info that needs to be added. Feel free to change the wording as 
you desire. Thanks.

-
The system visibility label authorization for super users will no longer be 
persisted in hbase:labels table. Super users will be determined at server 
startup time. They will have all the permissions for Visibility labels.
If you have a prior deployment that had super users' system label persisted in 
hbase:labels, you can clean up by invoking the shell command 'clear_auths'.
For example:  clear_auths 'old_superuser', 'system'
This is particularly necessary when you change super users, i.e. a previous 
super user is no longer a super user.
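
For anyone scripting the cleanup outside the shell, a rough Java sketch is 
below; it assumes the VisibilityClient helper of this era, i.e. the 
clearAuths(Configuration, String[], String) form, and the user/label names are 
just the ones from the example above:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.security.visibility.VisibilityClient;

public class ClearOldSuperuserAuth {
  public static void main(String[] args) throws Throwable {
    Configuration conf = HBaseConfiguration.create();
    // Same effect as the shell command: clear_auths 'old_superuser', 'system'
    VisibilityClient.clearAuths(conf, new String[] { "system" }, "old_superuser");
  }
}
{code}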

 Visibility Labels: issue with storing super users in labels table
 -

 Key: HBASE-12644
 URL: https://issues.apache.org/jira/browse/HBASE-12644
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 0.98.8, 0.99.2
Reporter: Jerry He
Assignee: Jerry He
 Fix For: 1.0.0, 2.0.0

 Attachments: 12644-0.98.patch, HBASE-12644-master-v2.patch, 
 HBASE-12644-master-v3.patch, HBASE-12644-master.patch


 Super users have all the permissions for ACL and Visibility labels.
 They are defined in hbase-site.xml.
 Currently in VisibilityController, we persist super user with their system 
 permission in hbase:labels.
 This makes change in super user difficult.
 There are two issues:
 In the current DefaultVisibilityLabelServiceImpl.addSystemLabel, we only add 
 super user when we initially create the 'system' label.
 No additional update after that even if super user changed. See code for 
 details.
  
 Additionally, there is no mechanism to remove any super user from the labels 
 table.
  
 We probably should not persist super users in the labels table.
 They are in hbase-site.xml and can just stay in labelsCache and used from 
 labelsCache after retrieval by Visibility Controller.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12644) Visibility Labels: issue with storing super users in labels table

2014-12-16 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-12644:
---
Release Note: 
The system visibility label authorization for super users will no longer be 
persisted in hbase:labels table. Super users will be determined at server 
startup time. They will have all the permissions for Visibility labels.
If you have a prior deployment that had super users' system label persisted in 
hbase:labels, you can clean up by invoking the shell command 'clear_auths'.
For example: clear_auths 'old_superuser', 'system'
This is particularly necessary when you change super users, i.e. a previous 
super user is no longer a super user.

I filled in the Release Note with Jerry's description.

When you click on the 'Edit' button, you should find the 'Release Note' field 
under the 'Hadoop Flags' field.

Thanks

 Visibility Labels: issue with storing super users in labels table
 -

 Key: HBASE-12644
 URL: https://issues.apache.org/jira/browse/HBASE-12644
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 0.98.8, 0.99.2
Reporter: Jerry He
Assignee: Jerry He
 Fix For: 1.0.0, 2.0.0

 Attachments: 12644-0.98.patch, HBASE-12644-master-v2.patch, 
 HBASE-12644-master-v3.patch, HBASE-12644-master.patch


 Super users have all the permissions for ACL and Visibility labels.
 They are defined in hbase-site.xml.
 Currently in VisibilityController, we persist super user with their system 
 permission in hbase:labels.
 This makes change in super user difficult.
 There are two issues:
 In the current DefaultVisibilityLabelServiceImpl.addSystemLabel, we only add 
 super user when we initially create the 'system' label.
 No additional update after that even if super user changed. See code for 
 details.
  
 Additionally, there is no mechanism to remove any super user from the labels 
 table.
  
 We probably should not persist super users in the labels table.
 They are in hbase-site.xml and can just stay in labelsCache and used from 
 labelsCache after retrieval by Visibility Controller.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-9431) Set 'hbase.bulkload.retries.number' to 10 as HBASE-8450 claims

2014-12-16 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-9431:
-
   Resolution: Fixed
Fix Version/s: 2.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Thanks for fixing my mangled posting, [~enis]. Pushed to branch-1.

 Set  'hbase.bulkload.retries.number' to 10 as HBASE-8450 claims
 ---

 Key: HBASE-9431
 URL: https://issues.apache.org/jira/browse/HBASE-9431
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: stack
 Fix For: 1.0.0, 2.0.0

 Attachments: 9431.txt, hbase-12072_v1.patch, hbase-9431.patch


 HBASE-8450 claims 'hbase.bulkload.retries.number' is set to 10 when it's 
 still 0 ([~jeffreyz] noticed).  Fix.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-10201) Port 'Make flush decisions per column family' to trunk

2014-12-16 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14248724#comment-14248724
 ] 

Enis Soztutar commented on HBASE-10201:
---

I don't think we should have this in 1.0.0. I am planning on cutting the RC 
tomorrow, and this seems to be a huge change for the last minute. Can we target 
1.1 instead? 

 Port 'Make flush decisions per column family' to trunk
 --

 Key: HBASE-10201
 URL: https://issues.apache.org/jira/browse/HBASE-10201
 Project: HBase
  Issue Type: Improvement
  Components: wal
Reporter: Ted Yu
Assignee: zhangduo
 Fix For: 1.0.0, 2.0.0

 Attachments: 3149-trunk-v1.txt, HBASE-10201-0.98.patch, 
 HBASE-10201-0.98_1.patch, HBASE-10201-0.98_2.patch, HBASE-10201-0.99.patch, 
 HBASE-10201.patch, HBASE-10201_1.patch, HBASE-10201_10.patch, 
 HBASE-10201_11.patch, HBASE-10201_12.patch, HBASE-10201_13.patch, 
 HBASE-10201_13.patch, HBASE-10201_14.patch, HBASE-10201_15.patch, 
 HBASE-10201_16.patch, HBASE-10201_17.patch, HBASE-10201_18.patch, 
 HBASE-10201_19.patch, HBASE-10201_2.patch, HBASE-10201_3.patch, 
 HBASE-10201_4.patch, HBASE-10201_5.patch, HBASE-10201_6.patch, 
 HBASE-10201_7.patch, HBASE-10201_8.patch, HBASE-10201_9.patch, 
 compactions.png, count.png, io.png, memstore.png


 Currently the flush decision is made using the aggregate size of all column 
 families. When large and small column families co-exist, this causes many 
 small flushes of the smaller CF. We need to make per-CF flush decisions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-5162) Basic client pushback mechanism

2014-12-16 Thread Jesse Yates (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesse Yates updated HBASE-5162:
---
Attachment: hbase-5162-trunk-v12-committed.patch

Attaching the patch for the actual code committed to trunk.

Thanks for the copious reviews, [~apurtell], and for hanging in there.

 Basic client pushback mechanism
 ---

 Key: HBASE-5162
 URL: https://issues.apache.org/jira/browse/HBASE-5162
 Project: HBase
  Issue Type: New Feature
Affects Versions: 0.92.0
Reporter: Jean-Daniel Cryans
Assignee: Jesse Yates
 Fix For: 1.0.0, 2.0.0

 Attachments: hbase-5162-trunk-v0.patch, hbase-5162-trunk-v1.patch, 
 hbase-5162-trunk-v10.patch, hbase-5162-trunk-v11.patch, 
 hbase-5162-trunk-v12-committed.patch, hbase-5162-trunk-v2.patch, 
 hbase-5162-trunk-v3.patch, hbase-5162-trunk-v4.patch, 
 hbase-5162-trunk-v5.patch, hbase-5162-trunk-v6.patch, 
 hbase-5162-trunk-v7.patch, hbase-5162-trunk-v8.patch, java_HBASE-5162.patch


 The current blocking we do when we are close to some limits (memstores over 
 the multiplier factor, too many store files, global memstore memory) is bad, 
 too coarse and confusing. After hitting HBASE-5161, it really becomes obvious 
 that we need something better.
 I did a little brainstorm with Stack, we came up quickly with two solutions:
  - Send some exception to the client, like OverloadedException, that's thrown 
 when some situation happens like getting past the low memory barrier. It 
 would be thrown when the client gets a handler and does some check while 
 putting or deleting. The client would treat this a retryable exception but 
 ideally wouldn't check .META. for a new location. It could be fancy and have 
 multiple levels of pushback, like send the exception to 25% of the clients, 
 and then go up if the situation persists. Should be easy to implement but 
 we'll be using a lot more IO to send the payload over and over again (but at 
 least it wouldn't sit in the RS's memory).
  - Send a message alongside a successful put or delete to tell the client to 
 slow down a little, this way we don't have to do back and forth with the 
 payload between the client and the server. It's a cleaner (I think) but more 
 involved solution.
 In every case the RS should do very obvious things to notify the operators of 
 this situation, through logs, web UI, metrics, etc.
 Other ideas?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-10201) Port 'Make flush decisions per column family' to trunk

2014-12-16 Thread Jeffrey Zhong (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14248725#comment-14248725
 ] 

Jeffrey Zhong commented on HBASE-10201:
---

Looks good to me (+1) for the master branch. Branch-1 should rely on [~enis]'s 
feedback.

 Port 'Make flush decisions per column family' to trunk
 --

 Key: HBASE-10201
 URL: https://issues.apache.org/jira/browse/HBASE-10201
 Project: HBase
  Issue Type: Improvement
  Components: wal
Reporter: Ted Yu
Assignee: zhangduo
 Fix For: 1.0.0, 2.0.0

 Attachments: 3149-trunk-v1.txt, HBASE-10201-0.98.patch, 
 HBASE-10201-0.98_1.patch, HBASE-10201-0.98_2.patch, HBASE-10201-0.99.patch, 
 HBASE-10201.patch, HBASE-10201_1.patch, HBASE-10201_10.patch, 
 HBASE-10201_11.patch, HBASE-10201_12.patch, HBASE-10201_13.patch, 
 HBASE-10201_13.patch, HBASE-10201_14.patch, HBASE-10201_15.patch, 
 HBASE-10201_16.patch, HBASE-10201_17.patch, HBASE-10201_18.patch, 
 HBASE-10201_19.patch, HBASE-10201_2.patch, HBASE-10201_3.patch, 
 HBASE-10201_4.patch, HBASE-10201_5.patch, HBASE-10201_6.patch, 
 HBASE-10201_7.patch, HBASE-10201_8.patch, HBASE-10201_9.patch, 
 compactions.png, count.png, io.png, memstore.png


 Currently the flush decision is made using the aggregate size of all column 
 families. When large and small column families co-exist, this causes many 
 small flushes of the smaller CF. We need to make per-CF flush decisions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12645) HBaseTestingUtility is using ${$HOME} for rootDir

2014-12-16 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14248729#comment-14248729
 ] 

Varun Saxena commented on HBASE-12645:
--

[~stack], if the flag is false, {{getDefaultRootDirPath()}} calls the function 
below. As you can see, if {{dataTestDirOnTestFS}} is null, the directory path 
is created/fetched again; if not, the previous value is returned. When the flag 
is true, we create a new path irrespective of whether {{dataTestDirOnTestFS}} 
is null or not.
{code}
  public Path getDataTestDirOnTestFS() throws IOException {
    if (dataTestDirOnTestFS == null) {
      setupDataTestDirOnTestFS();
    }
    return dataTestDirOnTestFS;
  }
{code}
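
For illustration only, the flag-aware behaviour would look roughly like the 
sketch below (the signature is hypothetical; the actual patch may wire 
{{getNewRootDirPathIfExists}} in differently):

{code}
  // Hypothetical sketch: when getNewRootDirPathIfExists is true, a fresh path
  // is set up even if dataTestDirOnTestFS has already been populated.
  public Path getDataTestDirOnTestFS(boolean getNewRootDirPathIfExists)
      throws IOException {
    if (getNewRootDirPathIfExists || dataTestDirOnTestFS == null) {
      setupDataTestDirOnTestFS();
    }
    return dataTestDirOnTestFS;
  }
{code}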

 HBaseTestingUtility is using ${$HOME} for rootDir
 -

 Key: HBASE-12645
 URL: https://issues.apache.org/jira/browse/HBASE-12645
 Project: HBase
  Issue Type: Test
  Components: test
Affects Versions: 1.0.0
Reporter: Nick Dimiduk
Assignee: Varun Saxena
Priority: Critical
 Fix For: 1.0.0, 2.0.0

 Attachments: HBASE-12645.002.patch, HBASE-12645.003.patch, 
 HBASE-12645.004.patch, HBASE-12645.004.patch, HBASE-12645.005.patch, 
 HBASE-12645.006.patch, HBASE-12645.006.patch, HBASE-12645.patch


 I noticed this while running tests on branch-1
 {noformat}
 Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.009 sec <<< 
 FAILURE! - in 
 org.apache.hadoop.hbase.regionserver.wal.TestReadOldRootAndMetaEdits
 org.apache.hadoop.hbase.regionserver.wal.TestReadOldRootAndMetaEdits  Time 
 elapsed: 0.009 sec  <<< ERROR!
 java.io.FileNotFoundException: Destination exists and is not a directory: 
 /homes/hortonnd/hbase
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:423)
 at 
 org.apache.hadoop.fs.ChecksumFileSystem.mkdirs(ChecksumFileSystem.java:588)
 at 
 org.apache.hadoop.hbase.HBaseTestingUtility.createRootDir(HBaseTestingUtility.java:1053)
 at 
 org.apache.hadoop.hbase.regionserver.wal.TestReadOldRootAndMetaEdits.setupBeforeClass(TestReadOldRootAndMetaEdits.java:70)
 {noformat}
 Either the testing utility has a regression or there's a config regression in 
 this test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12699) undefined method `setAsyncLogFlush' exception thrown when setting DEFERRED_LOG_FLUSH=true

2014-12-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14248739#comment-14248739
 ] 

Hudson commented on HBASE-12699:


SUCCESS: Integrated in HBase-TRUNK #5929 (See 
[https://builds.apache.org/job/HBase-TRUNK/5929/])
HBASE-12699 undefined method 'setAsyncLogFlush' exception thrown when setting 
DEFERRED_LOG_FLUSH=true (Stephen Jiang) (tedyu: rev 
1359e87b1757dd32d81fa2039af55156de1b3298)
* hbase-shell/src/main/ruby/hbase/admin.rb


 undefined method `setAsyncLogFlush' exception thrown when setting 
 DEFERRED_LOG_FLUSH=true
 --

 Key: HBASE-12699
 URL: https://issues.apache.org/jira/browse/HBASE-12699
 Project: HBase
  Issue Type: Bug
  Components: shell
Affects Versions: 2.0.0, 0.99.2
Reporter: Stephen Yuan Jiang
Assignee: Stephen Yuan Jiang
 Fix For: 1.0.0, 2.0.0

 Attachments: HBASE-12699.v1.branch-1.patch, 
 HBASE-12699.v1.master.patch

   Original Estimate: 24h
  Time Spent: 4h
  Remaining Estimate: 1h

 In hbase shell, when trying to set DEFERRED_LOG_FLUSH during create or alter, 
 an undefined method `setAsyncLogFlush' exception was thrown.  
 This is due to that DEFERRED_LOG_FLUSH was deprecated and the 
 setAsyncLogFlush method was removed.  It was replaced by DURABILITY.
 DEFERRED_LOG_FLUSH=true is the same as DURABILITY='ASYNC_WAL'
 The default is DURABILITY='SYNC_WAL', which is the same as the default 
 DEFERRED_LOG_FLUSH=false
 We should ask user to use the DURABILITY setting.  In the meantime, for 
 backward compatibility, the hbase shell should still allow setting 
 DEFERRED_LOG_FLUSH.  Internally, instead of calling setAsyncLogFlush, it 
 should call setDurability



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-5162) Basic client pushback mechanism

2014-12-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14248738#comment-14248738
 ] 

Hadoop QA commented on HBASE-5162:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12687543/hbase-5162-trunk-v12-committed.patch
  against master branch at commit a411227b0ebf78b4ee8ae7179e162b54734e77de.
  ATTACHMENT ID: 12687543

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 19 new 
or modified tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12100//console

This message is automatically generated.

 Basic client pushback mechanism
 ---

 Key: HBASE-5162
 URL: https://issues.apache.org/jira/browse/HBASE-5162
 Project: HBase
  Issue Type: New Feature
Affects Versions: 0.92.0
Reporter: Jean-Daniel Cryans
Assignee: Jesse Yates
 Fix For: 1.0.0, 2.0.0

 Attachments: hbase-5162-trunk-v0.patch, hbase-5162-trunk-v1.patch, 
 hbase-5162-trunk-v10.patch, hbase-5162-trunk-v11.patch, 
 hbase-5162-trunk-v12-committed.patch, hbase-5162-trunk-v2.patch, 
 hbase-5162-trunk-v3.patch, hbase-5162-trunk-v4.patch, 
 hbase-5162-trunk-v5.patch, hbase-5162-trunk-v6.patch, 
 hbase-5162-trunk-v7.patch, hbase-5162-trunk-v8.patch, java_HBASE-5162.patch


 The current blocking we do when we are close to some limits (memstores over 
 the multiplier factor, too many store files, global memstore memory) is bad, 
 too coarse and confusing. After hitting HBASE-5161, it really becomes obvious 
 that we need something better.
 I did a little brainstorm with Stack, we came up quickly with two solutions:
  - Send some exception to the client, like OverloadedException, that's thrown 
 when some situation happens like getting past the low memory barrier. It 
 would be thrown when the client gets a handler and does some check while 
 putting or deleting. The client would treat this a retryable exception but 
 ideally wouldn't check .META. for a new location. It could be fancy and have 
 multiple levels of pushback, like send the exception to 25% of the clients, 
 and then go up if the situation persists. Should be easy to implement but 
 we'll be using a lot more IO to send the payload over and over again (but at 
 least it wouldn't sit in the RS's memory).
  - Send a message alongside a successful put or delete to tell the client to 
 slow down a little, this way we don't have to do back and forth with the 
 payload between the client and the server. It's a cleaner (I think) but more 
 involved solution.
 In every case the RS should do very obvious things to notify the operators of 
 this situation, through logs, web UI, metrics, etc.
 Other ideas?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12699) undefined method `setAsyncLogFlush' exception thrown when setting DEFERRED_LOG_FLUSH=true

2014-12-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14248736#comment-14248736
 ] 

Hadoop QA commented on HBASE-12699:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12687519/HBASE-12699.v1.branch-1.patch
  against master branch at commit 96c6b9815ddbc9f2589655df4ad2381af04ac9f8.
  ATTACHMENT ID: 12687519

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+# DEFERRED_LOG_FLUSH is deprecated and was replaced by DURABILITY. 
 To keep backward compatible, it still exists.
+# However, it has to set before DURABILITY so that DURABILITY could 
overwrite if both args are set
+# DEFERRED_LOG_FLUSH is deprecated and was replaced by DURABILITY.  To 
keep backward compatible, it still exists.
+# However, it has to set before DURABILITY so that DURABILITY could 
overwrite if both args are set

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12097//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12097//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12097//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12097//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12097//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12097//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12097//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12097//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12097//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12097//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12097//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12097//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12097//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12097//console

This message is automatically generated.

 undefined method `setAsyncLogFlush' exception thrown when setting 
 DEFERRED_LOG_FLUSH=true
 --

 Key: HBASE-12699
 URL: https://issues.apache.org/jira/browse/HBASE-12699
 Project: HBase
  Issue Type: Bug
  Components: shell
Affects Versions: 2.0.0, 0.99.2
Reporter: Stephen Yuan Jiang
Assignee: Stephen Yuan Jiang
 Fix For: 1.0.0, 2.0.0

 Attachments: HBASE-12699.v1.branch-1.patch, 
 HBASE-12699.v1.master.patch

   Original Estimate: 24h
  Time Spent: 4h
  Remaining Estimate: 1h

 In hbase shell, when trying to set DEFERRED_LOG_FLUSH during 

[jira] [Updated] (HBASE-10605) Manage the call timeout in the server

2014-12-16 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-10605:
--
Component/s: rpc

 Manage the call timeout in the server
 -

 Key: HBASE-10605
 URL: https://issues.apache.org/jira/browse/HBASE-10605
 Project: HBase
  Issue Type: Improvement
  Components: regionserver, rpc
Affects Versions: 0.99.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 1.0.0


 Since HBASE-10566, we have an explicit call timeout available in the client.
 We could forward it to the server, and use this information for:
 - if the call is still in the queue, just cancel it
 - if the call is under execution, makes this information available in 
 RpcCallContext (actually change the RpcCallContext#disconnectSince to 
 something more generic), so it can be used by the query under execution to 
 stop its execution
 - in the future, interrupt it to manage the case 'stuck on a dead datanode' 
 or something similar
 - if the operation has finished, don't send the reply to the client, as by 
 definition the client is not interested anymore.
 From this, it will be easy to manage the cancellation: 
 disconnect/timeout/cancellation are similar from a service execution PoV



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-5162) Basic client pushback mechanism

2014-12-16 Thread Jesse Yates (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesse Yates updated HBASE-5162:
---
Status: Open  (was: Patch Available)

 Basic client pushback mechanism
 ---

 Key: HBASE-5162
 URL: https://issues.apache.org/jira/browse/HBASE-5162
 Project: HBase
  Issue Type: New Feature
Affects Versions: 0.92.0
Reporter: Jean-Daniel Cryans
Assignee: Jesse Yates
 Fix For: 1.0.0, 2.0.0

 Attachments: hbase-5162-trunk-v0.patch, hbase-5162-trunk-v1.patch, 
 hbase-5162-trunk-v10.patch, hbase-5162-trunk-v11.patch, 
 hbase-5162-trunk-v12-committed.patch, hbase-5162-trunk-v2.patch, 
 hbase-5162-trunk-v3.patch, hbase-5162-trunk-v4.patch, 
 hbase-5162-trunk-v5.patch, hbase-5162-trunk-v6.patch, 
 hbase-5162-trunk-v7.patch, hbase-5162-trunk-v8.patch, java_HBASE-5162.patch


 The current blocking we do when we are close to some limits (memstores over 
 the multiplier factor, too many store files, global memstore memory) is bad, 
 too coarse and confusing. After hitting HBASE-5161, it really becomes obvious 
 that we need something better.
 I did a little brainstorm with Stack, we came up quickly with two solutions:
  - Send some exception to the client, like OverloadedException, that's thrown 
 when some situation happens like getting past the low memory barrier. It 
 would be thrown when the client gets a handler and does some check while 
 putting or deleting. The client would treat this a retryable exception but 
 ideally wouldn't check .META. for a new location. It could be fancy and have 
 multiple levels of pushback, like send the exception to 25% of the clients, 
 and then go up if the situation persists. Should be easy to implement but 
 we'll be using a lot more IO to send the payload over and over again (but at 
 least it wouldn't sit in the RS's memory).
  - Send a message alongside a successful put or delete to tell the client to 
 slow down a little, this way we don't have to do back and forth with the 
 payload between the client and the server. It's a cleaner (I think) but more 
 involved solution.
 In every case the RS should do very obvious things to notify the operators of 
 this situation, through logs, web UI, metrics, etc.
 Other ideas?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12644) Visibility Labels: issue with storing super users in labels table

2014-12-16 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14248757#comment-14248757
 ] 

Jerry He commented on HBASE-12644:
--

Got it.  Thanks, Ted.

 Visibility Labels: issue with storing super users in labels table
 -

 Key: HBASE-12644
 URL: https://issues.apache.org/jira/browse/HBASE-12644
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 0.98.8, 0.99.2
Reporter: Jerry He
Assignee: Jerry He
 Fix For: 1.0.0, 2.0.0

 Attachments: 12644-0.98.patch, HBASE-12644-master-v2.patch, 
 HBASE-12644-master-v3.patch, HBASE-12644-master.patch


 Super users have all the permissions for ACL and Visibility labels.
 They are defined in hbase-site.xml.
 Currently in VisibilityController, we persist super user with their system 
 permission in hbase:labels.
 This makes change in super user difficult.
 There are two issues:
 In the current DefaultVisibilityLabelServiceImpl.addSystemLabel, we only add 
 super user when we initially create the 'system' label.
 No additional update after that even if super user changed. See code for 
 details.
  
 Additionally, there is no mechanism to remove any super user from the labels 
 table.
  
 We probably should not persist super users in the labels table.
 They are in hbase-site.xml and can just stay in labelsCache and used from 
 labelsCache after retrieval by Visibility Controller.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-5162) Basic client pushback mechanism

2014-12-16 Thread Jesse Yates (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesse Yates updated HBASE-5162:
---
Attachment: hbase-5162-branch-1-v0.patch

Attaching patch for branch-1, [~enis] - thoughts on getting this in?

Also, [~apurtell], would you like a patch for 0.98 as well? I think it should 
be pretty close to 1.0/2.0 version.

 Basic client pushback mechanism
 ---

 Key: HBASE-5162
 URL: https://issues.apache.org/jira/browse/HBASE-5162
 Project: HBase
  Issue Type: New Feature
Affects Versions: 0.92.0
Reporter: Jean-Daniel Cryans
Assignee: Jesse Yates
 Fix For: 1.0.0, 2.0.0

 Attachments: hbase-5162-branch-1-v0.patch, hbase-5162-trunk-v0.patch, 
 hbase-5162-trunk-v1.patch, hbase-5162-trunk-v10.patch, 
 hbase-5162-trunk-v11.patch, hbase-5162-trunk-v12-committed.patch, 
 hbase-5162-trunk-v2.patch, hbase-5162-trunk-v3.patch, 
 hbase-5162-trunk-v4.patch, hbase-5162-trunk-v5.patch, 
 hbase-5162-trunk-v6.patch, hbase-5162-trunk-v7.patch, 
 hbase-5162-trunk-v8.patch, java_HBASE-5162.patch


 The current blocking we do when we are close to some limits (memstores over 
 the multiplier factor, too many store files, global memstore memory) is bad, 
 too coarse and confusing. After hitting HBASE-5161, it really becomes obvious 
 that we need something better.
 I did a little brainstorm with Stack, we came up quickly with two solutions:
  - Send some exception to the client, like OverloadedException, that's thrown 
 when some situation happens like getting past the low memory barrier. It 
 would be thrown when the client gets a handler and does some check while 
 putting or deleting. The client would treat this a retryable exception but 
 ideally wouldn't check .META. for a new location. It could be fancy and have 
 multiple levels of pushback, like send the exception to 25% of the clients, 
 and then go up if the situation persists. Should be easy to implement but 
 we'll be using a lot more IO to send the payload over and over again (but at 
 least it wouldn't sit in the RS's memory).
  - Send a message alongside a successful put or delete to tell the client to 
 slow down a little, this way we don't have to do back and forth with the 
 payload between the client and the server. It's a cleaner (I think) but more 
 involved solution.
 In every case the RS should do very obvious things to notify the operators of 
 this situation, through logs, web UI, metrics, etc.
 Other ideas?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-5162) Basic client pushback mechanism

2014-12-16 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14248780#comment-14248780
 ] 

Andrew Purtell commented on HBASE-5162:
---

I have a feeling [~enis] won't take this at this late time for 1.0. What if we 
open a branch (soon) for 1.1?

I think we can look at this for 0.98 if we don't break non-private interfaces 
and we can get it back into 0.98 from master via any 1.x. 

 Basic client pushback mechanism
 ---

 Key: HBASE-5162
 URL: https://issues.apache.org/jira/browse/HBASE-5162
 Project: HBase
  Issue Type: New Feature
Affects Versions: 0.92.0
Reporter: Jean-Daniel Cryans
Assignee: Jesse Yates
 Fix For: 1.0.0, 2.0.0

 Attachments: hbase-5162-branch-1-v0.patch, hbase-5162-trunk-v0.patch, 
 hbase-5162-trunk-v1.patch, hbase-5162-trunk-v10.patch, 
 hbase-5162-trunk-v11.patch, hbase-5162-trunk-v12-committed.patch, 
 hbase-5162-trunk-v2.patch, hbase-5162-trunk-v3.patch, 
 hbase-5162-trunk-v4.patch, hbase-5162-trunk-v5.patch, 
 hbase-5162-trunk-v6.patch, hbase-5162-trunk-v7.patch, 
 hbase-5162-trunk-v8.patch, java_HBASE-5162.patch


 The current blocking we do when we are close to some limits (memstores over 
 the multiplier factor, too many store files, global memstore memory) is bad, 
 too coarse and confusing. After hitting HBASE-5161, it really becomes obvious 
 that we need something better.
 I did a little brainstorm with Stack, we came up quickly with two solutions:
  - Send some exception to the client, like OverloadedException, that's thrown 
 when some situation happens like getting past the low memory barrier. It 
 would be thrown when the client gets a handler and does some check while 
 putting or deleting. The client would treat this a retryable exception but 
 ideally wouldn't check .META. for a new location. It could be fancy and have 
 multiple levels of pushback, like send the exception to 25% of the clients, 
 and then go up if the situation persists. Should be easy to implement but 
 we'll be using a lot more IO to send the payload over and over again (but at 
 least it wouldn't sit in the RS's memory).
  - Send a message alongside a successful put or delete to tell the client to 
 slow down a little, this way we don't have to do back and forth with the 
 payload between the client and the server. It's a cleaner (I think) but more 
 involved solution.
 In every case the RS should do very obvious things to notify the operators of 
 this situation, through logs, web UI, metrics, etc.
 Other ideas?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-10201) Port 'Make flush decisions per column family' to trunk

2014-12-16 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14248793#comment-14248793
 ] 

stack commented on HBASE-10201:
---

bq. Can we target 1.1 instead?

Sure.

 Port 'Make flush decisions per column family' to trunk
 --

 Key: HBASE-10201
 URL: https://issues.apache.org/jira/browse/HBASE-10201
 Project: HBase
  Issue Type: Improvement
  Components: wal
Reporter: Ted Yu
Assignee: zhangduo
 Fix For: 1.0.0, 2.0.0

 Attachments: 3149-trunk-v1.txt, HBASE-10201-0.98.patch, 
 HBASE-10201-0.98_1.patch, HBASE-10201-0.98_2.patch, HBASE-10201-0.99.patch, 
 HBASE-10201.patch, HBASE-10201_1.patch, HBASE-10201_10.patch, 
 HBASE-10201_11.patch, HBASE-10201_12.patch, HBASE-10201_13.patch, 
 HBASE-10201_13.patch, HBASE-10201_14.patch, HBASE-10201_15.patch, 
 HBASE-10201_16.patch, HBASE-10201_17.patch, HBASE-10201_18.patch, 
 HBASE-10201_19.patch, HBASE-10201_2.patch, HBASE-10201_3.patch, 
 HBASE-10201_4.patch, HBASE-10201_5.patch, HBASE-10201_6.patch, 
 HBASE-10201_7.patch, HBASE-10201_8.patch, HBASE-10201_9.patch, 
 compactions.png, count.png, io.png, memstore.png


 Currently the flush decision is made using the aggregate size of all column 
 families. When large and small column families co-exist, this causes many 
 small flushes of the smaller CF. We need to make per-CF flush decisions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-10201) Port 'Make flush decisions per column family' to trunk

2014-12-16 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-10201:
--
   Resolution: Fixed
Fix Version/s: (was: 1.0.0)
 Release Note: Adds a new flushing policy mechanism. The default, 
org.apache.hadoop.hbase.regionserver.FlushLargeStoresPolicy, will try to avoid 
flushing out the small column families in a region, i.e. those whose memstores 
are below hbase.hregion.percolumnfamilyflush.size.lower.bound. To restore the 
old behavior of flushes writing out all column families, set 
hbase.regionserver.flush.policy to 
org.apache.hadoop.hbase.regionserver.FlushAllStoresPolicy, either in 
hbase-default.xml or on a per-table basis by setting the policy to use with 
HTableDescriptor.setFlushPolicyClassName().
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Pushed to master branch
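
As a quick illustration of the per-table override mentioned in the release note 
(a minimal sketch; the table name is arbitrary, and it assumes the setter 
counterpart of the getFlushPolicyClassName() accessor added by this change):

{code}
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;

public class AllStoresFlushExample {
  public static HTableDescriptor oldStyleFlushes(String table) {
    HTableDescriptor desc = new HTableDescriptor(TableName.valueOf(table));
    // Restore pre-HBASE-10201 behavior: flush all column families together.
    desc.setFlushPolicyClassName(
        "org.apache.hadoop.hbase.regionserver.FlushAllStoresPolicy");
    return desc;
  }
}
{code}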

 Port 'Make flush decisions per column family' to trunk
 --

 Key: HBASE-10201
 URL: https://issues.apache.org/jira/browse/HBASE-10201
 Project: HBase
  Issue Type: Improvement
  Components: wal
Reporter: Ted Yu
Assignee: zhangduo
 Fix For: 2.0.0

 Attachments: 3149-trunk-v1.txt, HBASE-10201-0.98.patch, 
 HBASE-10201-0.98_1.patch, HBASE-10201-0.98_2.patch, HBASE-10201-0.99.patch, 
 HBASE-10201.patch, HBASE-10201_1.patch, HBASE-10201_10.patch, 
 HBASE-10201_11.patch, HBASE-10201_12.patch, HBASE-10201_13.patch, 
 HBASE-10201_13.patch, HBASE-10201_14.patch, HBASE-10201_15.patch, 
 HBASE-10201_16.patch, HBASE-10201_17.patch, HBASE-10201_18.patch, 
 HBASE-10201_19.patch, HBASE-10201_2.patch, HBASE-10201_3.patch, 
 HBASE-10201_4.patch, HBASE-10201_5.patch, HBASE-10201_6.patch, 
 HBASE-10201_7.patch, HBASE-10201_8.patch, HBASE-10201_9.patch, 
 compactions.png, count.png, io.png, memstore.png


 Currently the flush decision is made using the aggregate size of all column 
 families. When large and small column families co-exist, this causes many 
 small flushes of the smaller CF. We need to make per-CF flush decisions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-12700) Backport 'Make flush decisions per column family' from master

2014-12-16 Thread stack (JIRA)
stack created HBASE-12700:
-

 Summary: Backport 'Make flush decisions per column family' from 
master
 Key: HBASE-12700
 URL: https://issues.apache.org/jira/browse/HBASE-12700
 Project: HBase
  Issue Type: Task
Reporter: stack
 Fix For: 1.1.0


Task to backport this nice feature to 1.1.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-10201) Port 'Make flush decisions per column family' to trunk

2014-12-16 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14248816#comment-14248816
 ] 

stack commented on HBASE-10201:
---

I forgot to say thank you, [~Apache9], for your persistence in getting this in.

 Port 'Make flush decisions per column family' to trunk
 --

 Key: HBASE-10201
 URL: https://issues.apache.org/jira/browse/HBASE-10201
 Project: HBase
  Issue Type: Improvement
  Components: wal
Reporter: Ted Yu
Assignee: zhangduo
 Fix For: 2.0.0

 Attachments: 3149-trunk-v1.txt, HBASE-10201-0.98.patch, 
 HBASE-10201-0.98_1.patch, HBASE-10201-0.98_2.patch, HBASE-10201-0.99.patch, 
 HBASE-10201.patch, HBASE-10201_1.patch, HBASE-10201_10.patch, 
 HBASE-10201_11.patch, HBASE-10201_12.patch, HBASE-10201_13.patch, 
 HBASE-10201_13.patch, HBASE-10201_14.patch, HBASE-10201_15.patch, 
 HBASE-10201_16.patch, HBASE-10201_17.patch, HBASE-10201_18.patch, 
 HBASE-10201_19.patch, HBASE-10201_2.patch, HBASE-10201_3.patch, 
 HBASE-10201_4.patch, HBASE-10201_5.patch, HBASE-10201_6.patch, 
 HBASE-10201_7.patch, HBASE-10201_8.patch, HBASE-10201_9.patch, 
 compactions.png, count.png, io.png, memstore.png


 Currently the flush decision is made using the aggregate size of all column 
 families. When large and small column families co-exist, this causes many 
 small flushes of the smaller CF. We need to make per-CF flush decisions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12645) HBaseTestingUtility is using ${$HOME} for rootDir

2014-12-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14248847#comment-14248847
 ] 

Hadoop QA commented on HBASE-12645:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12687527/HBASE-12645.006.patch
  against master branch at commit 1359e87b1757dd32d81fa2039af55156de1b3298.
  ATTACHMENT ID: 12687527

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 15 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12099//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12099//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12099//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12099//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12099//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12099//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12099//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12099//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12099//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12099//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12099//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12099//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12099//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12099//console

This message is automatically generated.

 HBaseTestingUtility is using ${$HOME} for rootDir
 -

 Key: HBASE-12645
 URL: https://issues.apache.org/jira/browse/HBASE-12645
 Project: HBase
  Issue Type: Test
  Components: test
Affects Versions: 1.0.0
Reporter: Nick Dimiduk
Assignee: Varun Saxena
Priority: Critical
 Fix For: 1.0.0, 2.0.0

 Attachments: HBASE-12645.002.patch, HBASE-12645.003.patch, 
 HBASE-12645.004.patch, HBASE-12645.004.patch, HBASE-12645.005.patch, 
 HBASE-12645.006.patch, HBASE-12645.006.patch, HBASE-12645.patch


 I noticed this while running tests on branch-1
 {noformat}
 Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.009 sec <<< 
 FAILURE! - in 
 org.apache.hadoop.hbase.regionserver.wal.TestReadOldRootAndMetaEdits
 org.apache.hadoop.hbase.regionserver.wal.TestReadOldRootAndMetaEdits  Time 
 elapsed: 0.009 sec  <<< ERROR!
 java.io.FileNotFoundException: Destination exists and is not a directory: 
 /homes/hortonnd/hbase
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:423)
 at 
 org.apache.hadoop.fs.ChecksumFileSystem.mkdirs(ChecksumFileSystem.java:588)
 at 
 org.apache.hadoop.hbase.HBaseTestingUtility.createRootDir(HBaseTestingUtility.java:1053)
 at 
 

[jira] [Commented] (HBASE-5954) Allow proper fsync support for HBase

2014-12-16 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14248882#comment-14248882
 ] 

Lars Hofhansl commented on HBASE-5954:
--

After thinking about this for a little bit, I think we need to do this per edit 
and per column family, the way my patch has it.
Please have a look at this patch; I think we should get it in. Along with this 
we need to document that you *have* to run HDFS with sync-on-close and 
sync-behind-writes. (In fact, everybody should do that in any case!)
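
For the record, the HDFS knobs usually meant by sync-on-close and 
sync-behind-writes are the datanode properties shown in the sketch below; the 
names should be double-checked against the HDFS release in use, and they 
normally belong in hdfs-site.xml on the datanodes rather than in client code:

{code}
import org.apache.hadoop.conf.Configuration;

public class FsyncHdfsSettings {
  // Shown on a Configuration object only to spell out the property names.
  public static void apply(Configuration conf) {
    conf.setBoolean("dfs.datanode.synconclose", true);
    conf.setBoolean("dfs.datanode.sync.behind.writes", true);
  }
}
{code}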

 Allow proper fsync support for HBase
 

 Key: HBASE-5954
 URL: https://issues.apache.org/jira/browse/HBASE-5954
 Project: HBase
  Issue Type: Improvement
  Components: HFile, wal
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
Priority: Critical
 Fix For: 2.0.0

 Attachments: 5954-WIP-trunk.txt, 5954-trunk-hdfs-trunk-v2.txt, 
 5954-trunk-hdfs-trunk-v3.txt, 5954-trunk-hdfs-trunk-v4.txt, 
 5954-trunk-hdfs-trunk-v5.txt, 5954-trunk-hdfs-trunk-v6.txt, 
 5954-trunk-hdfs-trunk.txt, hbase-hdfs-744.txt


 At least get recommendation into 0.96 doc and some numbers running w/ this 
 hdfs feature enabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12699) undefined method `setAsyncLogFlush' exception thrown when setting DEFERRED_LOG_FLUSH=true

2014-12-16 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14248893#comment-14248893
 ] 

Enis Soztutar commented on HBASE-12699:
---

Commit to 0.98 as well? Ping [~apurtell]. 

 undefined method `setAsyncLogFlush' exception thrown when setting 
 DEFERRED_LOG_FLUSH=true
 --

 Key: HBASE-12699
 URL: https://issues.apache.org/jira/browse/HBASE-12699
 Project: HBase
  Issue Type: Bug
  Components: shell
Affects Versions: 2.0.0, 0.99.2
Reporter: Stephen Yuan Jiang
Assignee: Stephen Yuan Jiang
 Fix For: 1.0.0, 2.0.0

 Attachments: HBASE-12699.v1.branch-1.patch, 
 HBASE-12699.v1.master.patch

   Original Estimate: 24h
  Time Spent: 4h
  Remaining Estimate: 1h

 In hbase shell, when trying to set DEFERRED_LOG_FLUSH during create or alter, 
 an undefined method `setAsyncLogFlush' exception was thrown.  
 This is due to that DEFERRED_LOG_FLUSH was deprecated and the 
 setAsyncLogFlush method was removed.  It was replaced by DURABILITY.
 DEFERRED_LOG_FLUSH=true is the same as DURABILITY='ASYNC_WAL'
 The default is DURABILITY='SYNC_WAL', which is the same as the default 
 DEFERRED_LOG_FLUSH=false
 We should ask user to use the DURABILITY setting.  In the meantime, for 
 backward compatibility, the hbase shell should still allow setting 
 DEFERRED_LOG_FLUSH.  Internally, instead of calling setAsyncLogFlush, it 
 should call setDurability



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-5954) Allow proper fsync support for HBase

2014-12-16 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14248907#comment-14248907
 ] 

Sean Busbey commented on HBASE-5954:


Can you update the patch to include changes to WALPerformanceEvaluation to 
allow a mix of sync / fsync calls so we can see what the perf impact will look 
like?

 Allow proper fsync support for HBase
 

 Key: HBASE-5954
 URL: https://issues.apache.org/jira/browse/HBASE-5954
 Project: HBase
  Issue Type: Improvement
  Components: HFile, wal
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
Priority: Critical
 Fix For: 2.0.0

 Attachments: 5954-WIP-trunk.txt, 5954-trunk-hdfs-trunk-v2.txt, 
 5954-trunk-hdfs-trunk-v3.txt, 5954-trunk-hdfs-trunk-v4.txt, 
 5954-trunk-hdfs-trunk-v5.txt, 5954-trunk-hdfs-trunk-v6.txt, 
 5954-trunk-hdfs-trunk.txt, hbase-hdfs-744.txt


 At least get recommendation into 0.96 doc and some numbers running w/ this 
 hdfs feature enabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-8450) Update hbase-default.xml and general recommendations to better suit current hw, h2, experience, etc.

2014-12-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14248920#comment-14248920
 ] 

Hudson commented on HBASE-8450:
---

SUCCESS: Integrated in HBase-1.0 #589 (See 
[https://builds.apache.org/job/HBase-1.0/589/])
HBASE-9431 Set 'hbase.bulkload.retries.number' to 10 as HBASE-8450 claims 
(stack: rev 36269b64038c6fcf076a1d8df90ea087559eb208)
* hbase-common/src/main/resources/hbase-default.xml
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.java


 Update hbase-default.xml and general recommendations to better suit current 
 hw, h2, experience, etc.
 

 Key: HBASE-8450
 URL: https://issues.apache.org/jira/browse/HBASE-8450
 Project: HBase
  Issue Type: Task
  Components: Usability
Reporter: stack
Assignee: stack
Priority: Critical
 Fix For: 0.95.1

 Attachments: 8450.txt, 8450v2.txt, 8450v3.txt, 8450v5.txt, 8450v5.txt


 This is a critical task we need to do before we release; review our defaults.
 On cursory review, there are configs in hbase-default.xml that no longer have 
 matching code; there are some that should be changed because we know better 
 now and others that should be amended because hardware and deploys are bigger 
 than they used to be.
 We could also move stuff around so that the must-edit stuff is near the top 
 (zk quorum config. is mid-way down the page) and beef up the descriptions -- 
 especially since these descriptions shine through in the refguide.
 Lastly, I notice that our tgz does not include an hbase-default.xml other 
 than the one bundled up in the jar.  Maybe we should make it more accessible.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-9431) Set 'hbase.bulkload.retries.number' to 10 as HBASE-8450 claims

2014-12-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14248921#comment-14248921
 ] 

Hudson commented on HBASE-9431:
---

SUCCESS: Integrated in HBase-1.0 #589 (See 
[https://builds.apache.org/job/HBase-1.0/589/])
HBASE-9431 Set 'hbase.bulkload.retries.number' to 10 as HBASE-8450 claims 
(stack: rev 36269b64038c6fcf076a1d8df90ea087559eb208)
* hbase-common/src/main/resources/hbase-default.xml
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.java


 Set  'hbase.bulkload.retries.number' to 10 as HBASE-8450 claims
 ---

 Key: HBASE-9431
 URL: https://issues.apache.org/jira/browse/HBASE-9431
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: stack
 Fix For: 1.0.0, 2.0.0

 Attachments: 9431.txt, hbase-12072_v1.patch, hbase-9431.patch


 HBASE-8450 claims 'hbase.bulkload.retries.number' is set to 10 when it's 
 still 0 ([~jeffreyz] noticed).  Fix.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-9431) Set 'hbase.bulkload.retries.number' to 10 as HBASE-8450 claims

2014-12-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14248945#comment-14248945
 ] 

Hudson commented on HBASE-9431:
---

FAILURE: Integrated in HBase-TRUNK #5930 (See 
[https://builds.apache.org/job/HBase-TRUNK/5930/])
HBASE-9431 Set 'hbase.bulkload.retries.number' to 10 as HBASE-8450 claims 
(stack: rev e5d813c46b41ab4fb48d72731eb34422f260b81a)
* hbase-common/src/main/resources/hbase-default.xml
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.java


 Set  'hbase.bulkload.retries.number' to 10 as HBASE-8450 claims
 ---

 Key: HBASE-9431
 URL: https://issues.apache.org/jira/browse/HBASE-9431
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: stack
 Fix For: 1.0.0, 2.0.0

 Attachments: 9431.txt, hbase-12072_v1.patch, hbase-9431.patch


 HBASE-8450 claims 'hbase.bulkload.retries.number' is set to 10 when it is 
 still 0 ([~jeffreyz] noticed).  Fix.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-10201) Port 'Make flush decisions per column family' to trunk

2014-12-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14248943#comment-14248943
 ] 

Hudson commented on HBASE-10201:


FAILURE: Integrated in HBase-TRUNK #5930 (See 
[https://builds.apache.org/job/HBase-TRUNK/5930/])
HBASE-10201 Port 'Make flush decisions per column family' to trunk (stack: rev 
c7fad665f34fd3c17999d5cc60b04d3faff6a7f5)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestFSHLog.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WAL.java
* hbase-common/src/main/resources/hbase-default.xml
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreFlusher.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestFlushRegionEntry.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/HTableDescriptor.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestWALReplay.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/wal/TestWALFactory.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHeapMemoryManager.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/FlushLargeStoresPolicy.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/FlushRequester.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestPerColumnFamilyFlush.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/wal/DisabledWALProvider.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/FlushAllStoresPolicy.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/LogRoller.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/wal/TestDefaultWALProvider.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/FlushPolicy.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/TestIOFencing.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/FSWALEntry.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/FSHLog.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/FlushPolicyFactory.java


 Port 'Make flush decisions per column family' to trunk
 --

 Key: HBASE-10201
 URL: https://issues.apache.org/jira/browse/HBASE-10201
 Project: HBase
  Issue Type: Improvement
  Components: wal
Reporter: Ted Yu
Assignee: zhangduo
 Fix For: 2.0.0

 Attachments: 3149-trunk-v1.txt, HBASE-10201-0.98.patch, 
 HBASE-10201-0.98_1.patch, HBASE-10201-0.98_2.patch, HBASE-10201-0.99.patch, 
 HBASE-10201.patch, HBASE-10201_1.patch, HBASE-10201_10.patch, 
 HBASE-10201_11.patch, HBASE-10201_12.patch, HBASE-10201_13.patch, 
 HBASE-10201_13.patch, HBASE-10201_14.patch, HBASE-10201_15.patch, 
 HBASE-10201_16.patch, HBASE-10201_17.patch, HBASE-10201_18.patch, 
 HBASE-10201_19.patch, HBASE-10201_2.patch, HBASE-10201_3.patch, 
 HBASE-10201_4.patch, HBASE-10201_5.patch, HBASE-10201_6.patch, 
 HBASE-10201_7.patch, HBASE-10201_8.patch, HBASE-10201_9.patch, 
 compactions.png, count.png, io.png, memstore.png


 Currently the flush decision is made using the aggregate size of all column 
 families. When large and small column families co-exist, this causes many 
 small flushes of the smaller CF. We need to make per-CF flush decisions.
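 To make the intent concrete, a hedged sketch (not the actual 
 FlushLargeStoresPolicy, just the selection idea) of picking only the families 
 whose memstores are worth flushing:
 {code}
 import java.util.ArrayList;
 import java.util.List;
 import java.util.Map;

 // Toy model of the per-CF flush decision: flush only the families whose
 // memstore exceeds a threshold, instead of flushing every family whenever
 // the aggregate region size crosses the flush size.
 public class PerFamilyFlushSketch {
   static List<String> selectStoresToFlush(Map<String, Long> memstoreSizeByFamily,
                                           long perFamilyThreshold) {
     List<String> toFlush = new ArrayList<String>();
     for (Map.Entry<String, Long> e : memstoreSizeByFamily.entrySet()) {
       if (e.getValue() >= perFamilyThreshold) {
         toFlush.add(e.getKey());
       }
     }
     // Fall back to flushing everything if no single family is large enough;
     // otherwise a region over the global limit could never shrink.
     if (toFlush.isEmpty()) {
       toFlush.addAll(memstoreSizeByFamily.keySet());
     }
     return toFlush;
   }
 }
 {code}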



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-5162) Basic client pushback mechanism

2014-12-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14248946#comment-14248946
 ] 

Hudson commented on HBASE-5162:
---

FAILURE: Integrated in HBase-TRUNK #5930 (See 
[https://builds.apache.org/job/HBase-TRUNK/5930/])
HBASE-5162 Basic client pushback mechanism (jyates: rev 
a411227b0ebf78b4ee8ae7179e162b54734e77de)
* 
hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestClientExponentialBackoff.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/client/DelayingRunner.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALEditsReplaySink.java
* hbase-protocol/src/main/protobuf/Client.proto
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionManager.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/RpcRetryingCallerFactory.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/client/MultiAction.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/backoff/ExponentialClientBackoffPolicy.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ResponseConverter.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/HConnectionTestingUtility.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/RpcRetryingCaller.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/backoff/ClientBackoffPolicyFactory.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/StatsTrackingRpcRetryingCaller.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/client/Result.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/backoff/ClientBackoffPolicy.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/RpcRetryingCallerImpl.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/ServerStatisticTracker.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/ClusterConnection.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestClientPushback.java
* 
hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/ClientProtos.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestRegionReplicaReplicationEndpoint.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/client/ResultStatsUtil.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/client/HTable.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
* 
hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestFastFailWithoutTestUtil.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestReplicasClient.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncProcess.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionAdapter.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/backoff/ServerStatistics.java
* 
hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestAsyncProcess.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/RegionCoprocessorRpcChannel.java


 Basic client pushback mechanism
 ---

 Key: HBASE-5162
 URL: https://issues.apache.org/jira/browse/HBASE-5162
 Project: HBase
  Issue Type: New Feature
Affects Versions: 0.92.0
Reporter: Jean-Daniel Cryans
Assignee: Jesse Yates
 Fix For: 1.0.0, 2.0.0

 Attachments: hbase-5162-branch-1-v0.patch, hbase-5162-trunk-v0.patch, 
 hbase-5162-trunk-v1.patch, hbase-5162-trunk-v10.patch, 
 hbase-5162-trunk-v11.patch, hbase-5162-trunk-v12-committed.patch, 
 hbase-5162-trunk-v2.patch, hbase-5162-trunk-v3.patch, 
 hbase-5162-trunk-v4.patch, hbase-5162-trunk-v5.patch, 
 hbase-5162-trunk-v6.patch, hbase-5162-trunk-v7.patch, 
 hbase-5162-trunk-v8.patch, java_HBASE-5162.patch


 The current blocking we do when we are close to some limits (memstores over 
 the multiplier factor, too many store files, global memstore memory) is bad, 
 too coarse and confusing. After hitting HBASE-5161, it really becomes obvious 
 that we need something better.
 I did a little brainstorm with Stack, we came up quickly with two solutions:
  - Send some exception to the client, like OverloadedException, that's thrown 
 when some situation happens like getting past the low memory barrier. It 
 would be thrown when the client gets a handler and does some check while 
 putting or deleting. The client would treat this a retryable exception but 
 ideally wouldn't check .META. for a new location. It could be fancy and have 
 multiple levels of pushback, like send the exception to 25% of the clients, 
 and then go up if the situation persists. Should be easy to implement but 
 we'll be using a lot more IO to send the payload over and over again (but at 
 least it wouldn't sit in the RS's memory).
  - Send a message alongside a successful put or delete to tell the client to 
 slow down a little, this way we don't have to do back and forth with the 
 payload between the client and the server. It's a cleaner (I think) but more 
 involved solution.
 In every case the RS should do very obvious things to notify the operators of 
 this situation, through logs, web UI, metrics, etc.
 Other ideas?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

[jira] [Commented] (HBASE-8450) Update hbase-default.xml and general recommendations to better suit current hw, h2, experience, etc.

2014-12-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14248944#comment-14248944
 ] 

Hudson commented on HBASE-8450:
---

FAILURE: Integrated in HBase-TRUNK #5930 (See 
[https://builds.apache.org/job/HBase-TRUNK/5930/])
HBASE-9431 Set 'hbase.bulkload.retries.number' to 10 as HBASE-8450 claims 
(stack: rev e5d813c46b41ab4fb48d72731eb34422f260b81a)
* hbase-common/src/main/resources/hbase-default.xml
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.java


 Update hbase-default.xml and general recommendations to better suit current 
 hw, h2, experience, etc.
 

 Key: HBASE-8450
 URL: https://issues.apache.org/jira/browse/HBASE-8450
 Project: HBase
  Issue Type: Task
  Components: Usability
Reporter: stack
Assignee: stack
Priority: Critical
 Fix For: 0.95.1

 Attachments: 8450.txt, 8450v2.txt, 8450v3.txt, 8450v5.txt, 8450v5.txt


 This is a critical task we need to do before we release; review our defaults.
 On cursory review, there are configs in hbase-default.xml that no longer have 
 matching code; there are some that should be changed because we know better 
 now and others that should be amended because hardware and deploys are bigger 
 than they used to be.
 We could also move stuff around so that the must-edit stuff is near the top 
 (zk quorum config. is mid-way down the page) and beef up the descriptions -- 
 especially since these descriptions shine through in the refguide.
 Lastly, I notice that our tgz does not include an hbase-default.xml other 
 than the one bundled up in the jar.  Maybe we should make it more accessible.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12699) undefined method `setAsyncLogFlush' exception thrown when setting DEFERRED_LOG_FLUSH=true

2014-12-16 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-12699:
---
Fix Version/s: 0.98.10

Thanks. Cherry-picked back and pushed. TestShell passes and I manually tested 
altering a table using both DEFERRED_LOG_FLUSH and DURABILITY (separately).

 undefined method `setAsyncLogFlush' exception thrown when setting 
 DEFERRED_LOG_FLUSH=true
 --

 Key: HBASE-12699
 URL: https://issues.apache.org/jira/browse/HBASE-12699
 Project: HBase
  Issue Type: Bug
  Components: shell
Affects Versions: 2.0.0, 0.99.2
Reporter: Stephen Yuan Jiang
Assignee: Stephen Yuan Jiang
 Fix For: 1.0.0, 2.0.0, 0.98.10

 Attachments: HBASE-12699.v1.branch-1.patch, 
 HBASE-12699.v1.master.patch

   Original Estimate: 24h
  Time Spent: 4h
  Remaining Estimate: 1h

 In hbase shell, when trying to set DEFERRED_LOG_FLUSH during create or alter, 
 an undefined method `setAsyncLogFlush' exception was thrown.
 This is because DEFERRED_LOG_FLUSH was deprecated and the setAsyncLogFlush 
 method was removed; it was replaced by DURABILITY.
 DEFERRED_LOG_FLUSH=true is the same as DURABILITY='ASYNC_WAL'.
 The default is DURABILITY='SYNC_WAL', which is the same as the default 
 DEFERRED_LOG_FLUSH=false.
 We should ask users to use the DURABILITY setting.  In the meantime, for 
 backward compatibility, the hbase shell should still allow setting 
 DEFERRED_LOG_FLUSH.  Internally, instead of calling setAsyncLogFlush, it 
 should call setDurability.
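 For reference, a minimal sketch of that mapping using the public 
 HTableDescriptor/Durability API (the table name and helper method are 
 illustrative, not the shell's actual code path):
 {code}
 import org.apache.hadoop.hbase.HTableDescriptor;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.Durability;

 public class DeferredLogFlushMapping {
   // Map the deprecated DEFERRED_LOG_FLUSH boolean onto the DURABILITY setting.
   static void applyDeferredLogFlush(HTableDescriptor htd, boolean deferredLogFlush) {
     htd.setDurability(deferredLogFlush ? Durability.ASYNC_WAL : Durability.SYNC_WAL);
   }

   public static void main(String[] args) {
     HTableDescriptor htd = new HTableDescriptor(TableName.valueOf("t1"));
     applyDeferredLogFlush(htd, true);        // DEFERRED_LOG_FLUSH => true
     System.out.println(htd.getDurability()); // ASYNC_WAL
   }
 }
 {code}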



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-5162) Basic client pushback mechanism

2014-12-16 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14248961#comment-14248961
 ] 

Jesse Yates commented on HBASE-5162:


Hmmm, well that's an issue. Looking into the NPE now.

 Basic client pushback mechanism
 ---

 Key: HBASE-5162
 URL: https://issues.apache.org/jira/browse/HBASE-5162
 Project: HBase
  Issue Type: New Feature
Affects Versions: 0.92.0
Reporter: Jean-Daniel Cryans
Assignee: Jesse Yates
 Fix For: 1.0.0, 2.0.0

 Attachments: hbase-5162-branch-1-v0.patch, hbase-5162-trunk-v0.patch, 
 hbase-5162-trunk-v1.patch, hbase-5162-trunk-v10.patch, 
 hbase-5162-trunk-v11.patch, hbase-5162-trunk-v12-committed.patch, 
 hbase-5162-trunk-v2.patch, hbase-5162-trunk-v3.patch, 
 hbase-5162-trunk-v4.patch, hbase-5162-trunk-v5.patch, 
 hbase-5162-trunk-v6.patch, hbase-5162-trunk-v7.patch, 
 hbase-5162-trunk-v8.patch, java_HBASE-5162.patch


 The current blocking we do when we are close to some limits (memstores over 
 the multiplier factor, too many store files, global memstore memory) is bad, 
 too coarse and confusing. After hitting HBASE-5161, it really becomes obvious 
 that we need something better.
 I did a little brainstorm with Stack, we came up quickly with two solutions:
  - Send some exception to the client, like OverloadedException, that's thrown 
 when some situation happens like getting past the low memory barrier. It 
 would be thrown when the client gets a handler and does some check while 
 putting or deleting. The client would treat this a retryable exception but 
 ideally wouldn't check .META. for a new location. It could be fancy and have 
 multiple levels of pushback, like send the exception to 25% of the clients, 
 and then go up if the situation persists. Should be easy to implement but 
 we'll be using a lot more IO to send the payload over and over again (but at 
 least it wouldn't sit in the RS's memory).
  - Send a message alongside a successful put or delete to tell the client to 
 slow down a little, this way we don't have to do back and forth with the 
 payload between the client and the server. It's a cleaner (I think) but more 
 involved solution.
 In every case the RS should do very obvious things to notify the operators of 
 this situation, through logs, web UI, metrics, etc.
 Other ideas?
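 As a rough illustration of the "hint instead of exception" option, a hedged 
 sketch of turning a reported memstore load fraction into a client-side pause 
 (not the committed backoff policy, just the shape of the idea):
 {code}
 // Toy backoff: the server reports how loaded a region is (0.0 - 1.0) along
 // with a successful multi response, and the client scales its pause from
 // that load instead of waiting for a retryable exception.
 public class BackoffSketch {
   private static final long MAX_BACKOFF_MS = 5000L;

   static long backoffMillis(double memstoreLoadFraction) {
     if (memstoreLoadFraction <= 0) {
       return 0L;  // unloaded server: no pushback
     }
     double clamped = Math.min(1.0, memstoreLoadFraction);
     // Grow the pause super-linearly as the region approaches its limit.
     double scale = Math.pow(clamped, 4);
     return (long) (scale * MAX_BACKOFF_MS);
   }

   public static void main(String[] args) {
     System.out.println(backoffMillis(0.25));  // small pause
     System.out.println(backoffMillis(0.95));  // close to the cap
   }
 }
 {code}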



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-5162) Basic client pushback mechanism

2014-12-16 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14248968#comment-14248968
 ] 

Andrew Purtell commented on HBASE-5162:
---

Thanks. If it's a minor fix please commit an addendum. If it's more involved, 
we can revert and commit again later. 

 Basic client pushback mechanism
 ---

 Key: HBASE-5162
 URL: https://issues.apache.org/jira/browse/HBASE-5162
 Project: HBase
  Issue Type: New Feature
Affects Versions: 0.92.0
Reporter: Jean-Daniel Cryans
Assignee: Jesse Yates
 Fix For: 1.0.0, 2.0.0

 Attachments: hbase-5162-branch-1-v0.patch, hbase-5162-trunk-v0.patch, 
 hbase-5162-trunk-v1.patch, hbase-5162-trunk-v10.patch, 
 hbase-5162-trunk-v11.patch, hbase-5162-trunk-v12-committed.patch, 
 hbase-5162-trunk-v2.patch, hbase-5162-trunk-v3.patch, 
 hbase-5162-trunk-v4.patch, hbase-5162-trunk-v5.patch, 
 hbase-5162-trunk-v6.patch, hbase-5162-trunk-v7.patch, 
 hbase-5162-trunk-v8.patch, java_HBASE-5162.patch


 The current blocking we do when we are close to some limits (memstores over 
 the multiplier factor, too many store files, global memstore memory) is bad, 
 too coarse and confusing. After hitting HBASE-5161, it really becomes obvious 
 that we need something better.
 I did a little brainstorm with Stack, we came up quickly with two solutions:
  - Send some exception to the client, like OverloadedException, that's thrown 
 when some situation happens like getting past the low memory barrier. It 
 would be thrown when the client gets a handler and does some check while 
 putting or deleting. The client would treat this a retryable exception but 
 ideally wouldn't check .META. for a new location. It could be fancy and have 
 multiple levels of pushback, like send the exception to 25% of the clients, 
 and then go up if the situation persists. Should be easy to implement but 
 we'll be using a lot more IO to send the payload over and over again (but at 
 least it wouldn't sit in the RS's memory).
  - Send a message alongside a successful put or delete to tell the client to 
 slow down a little, this way we don't have to do back and forth with the 
 payload between the client and the server. It's a cleaner (I think) but more 
 involved solution.
 In every case the RS should do very obvious things to notify the operators of 
 this situation, through logs, web UI, metrics, etc.
 Other ideas?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-5954) Allow proper fsync support for HBase

2014-12-16 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14248994#comment-14248994
 ] 

Lars Hofhansl commented on HBASE-5954:
--

That's a good idea. Will do.
(In the end... We know there will be a non-trivial impact on rotating disks; 
we're doing this to guard against power outages.)
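For context on where that rotating-disk cost comes from, a minimal sketch 
contrasting HDFS hflush with hsync (the path and payload are illustrative):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class FlushVersusSync {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    FSDataOutputStream out = fs.create(new Path("/tmp/wal-demo"));
    out.write(new byte[]{1, 2, 3});
    // hflush: acked once the pipeline has the bytes in memory; cheap, but not
    // durable if every replica loses power before the OS writes them out.
    out.hflush();
    // hsync: additionally forces the bytes to the datanodes' disks; durable
    // across a power outage, but each call pays a seek on rotating media.
    out.hsync();
    out.close();
    fs.delete(new Path("/tmp/wal-demo"), false);
  }
}
{code}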


 Allow proper fsync support for HBase
 

 Key: HBASE-5954
 URL: https://issues.apache.org/jira/browse/HBASE-5954
 Project: HBase
  Issue Type: Improvement
  Components: HFile, wal
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
Priority: Critical
 Fix For: 2.0.0

 Attachments: 5954-WIP-trunk.txt, 5954-trunk-hdfs-trunk-v2.txt, 
 5954-trunk-hdfs-trunk-v3.txt, 5954-trunk-hdfs-trunk-v4.txt, 
 5954-trunk-hdfs-trunk-v5.txt, 5954-trunk-hdfs-trunk-v6.txt, 
 5954-trunk-hdfs-trunk.txt, hbase-hdfs-744.txt


 At least get recommendation into 0.96 doc and some numbers running w/ this 
 hdfs feature enabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-5954) Allow proper fsync support for HBase

2014-12-16 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14248998#comment-14248998
 ] 

Sean Busbey commented on HBASE-5954:


Oh certainly. I expect there's going to be a follow-on ticket to add docs to 
the ref guide explaining the basic trade-off, and I want to make sure the code 
bits are in place so that whoever has to write that section can focus on tests 
rather than code gaps.

 Allow proper fsync support for HBase
 

 Key: HBASE-5954
 URL: https://issues.apache.org/jira/browse/HBASE-5954
 Project: HBase
  Issue Type: Improvement
  Components: HFile, wal
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
Priority: Critical
 Fix For: 2.0.0

 Attachments: 5954-WIP-trunk.txt, 5954-trunk-hdfs-trunk-v2.txt, 
 5954-trunk-hdfs-trunk-v3.txt, 5954-trunk-hdfs-trunk-v4.txt, 
 5954-trunk-hdfs-trunk-v5.txt, 5954-trunk-hdfs-trunk-v6.txt, 
 5954-trunk-hdfs-trunk.txt, hbase-hdfs-744.txt


 At least get recommendation into 0.96 doc and some numbers running w/ this 
 hdfs feature enabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12690) list_quotas command is failing with not able to load Java class

2014-12-16 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-12690:
---
   Resolution: Fixed
Fix Version/s: (was: 1.0.0)
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Integrated to master branch.

Thanks for the patch, Ashish.

 list_quotas command is failing with not able to load Java class
 ---

 Key: HBASE-12690
 URL: https://issues.apache.org/jira/browse/HBASE-12690
 Project: HBase
  Issue Type: Bug
  Components: shell
Affects Versions: 2.0.0, 0.99.2
Reporter: Ashish Singhi
Assignee: Ashish Singhi
 Fix For: 2.0.0

 Attachments: HBASE-12690.patch


 {noformat}
 hbase(main):004:0> list_quotas
 OWNER                                    QUOTAS
 ERROR: cannot load Java class org.apache.hadoop.hbase.client.ConnectionFactory
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-5162) Basic client pushback mechanism

2014-12-16 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14249066#comment-14249066
 ] 

Nick Dimiduk commented on HBASE-5162:
-

bq. What if we open a branch (soon) for 1.1.

Why wait for 1.1, any reason this couldn't go into 1.0.1? I mean, if it's okay 
for 0.98.x, why not 1.0.x?

 Basic client pushback mechanism
 ---

 Key: HBASE-5162
 URL: https://issues.apache.org/jira/browse/HBASE-5162
 Project: HBase
  Issue Type: New Feature
Affects Versions: 0.92.0
Reporter: Jean-Daniel Cryans
Assignee: Jesse Yates
 Fix For: 1.0.0, 2.0.0

 Attachments: hbase-5162-branch-1-v0.patch, hbase-5162-trunk-v0.patch, 
 hbase-5162-trunk-v1.patch, hbase-5162-trunk-v10.patch, 
 hbase-5162-trunk-v11.patch, hbase-5162-trunk-v12-committed.patch, 
 hbase-5162-trunk-v2.patch, hbase-5162-trunk-v3.patch, 
 hbase-5162-trunk-v4.patch, hbase-5162-trunk-v5.patch, 
 hbase-5162-trunk-v6.patch, hbase-5162-trunk-v7.patch, 
 hbase-5162-trunk-v8.patch, java_HBASE-5162.patch


 The current blocking we do when we are close to some limits (memstores over 
 the multiplier factor, too many store files, global memstore memory) is bad, 
 too coarse and confusing. After hitting HBASE-5161, it really becomes obvious 
 that we need something better.
 I did a little brainstorm with Stack, we came up quickly with two solutions:
  - Send some exception to the client, like OverloadedException, that's thrown 
 when some situation happens like getting past the low memory barrier. It 
 would be thrown when the client gets a handler and does some check while 
 putting or deleting. The client would treat this a retryable exception but 
 ideally wouldn't check .META. for a new location. It could be fancy and have 
 multiple levels of pushback, like send the exception to 25% of the clients, 
 and then go up if the situation persists. Should be easy to implement but 
 we'll be using a lot more IO to send the payload over and over again (but at 
 least it wouldn't sit in the RS's memory).
  - Send a message alongside a successful put or delete to tell the client to 
 slow down a little, this way we don't have to do back and forth with the 
 payload between the client and the server. It's a cleaner (I think) but more 
 involved solution.
 In every case the RS should do very obvious things to notify the operators of 
 this situation, through logs, web UI, metrics, etc.
 Other ideas?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-5162) Basic client pushback mechanism

2014-12-16 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14249079#comment-14249079
 ] 

Andrew Purtell commented on HBASE-5162:
---

Either option sounds good IMHO :-)

 Basic client pushback mechanism
 ---

 Key: HBASE-5162
 URL: https://issues.apache.org/jira/browse/HBASE-5162
 Project: HBase
  Issue Type: New Feature
Affects Versions: 0.92.0
Reporter: Jean-Daniel Cryans
Assignee: Jesse Yates
 Fix For: 1.0.0, 2.0.0

 Attachments: hbase-5162-branch-1-v0.patch, hbase-5162-trunk-v0.patch, 
 hbase-5162-trunk-v1.patch, hbase-5162-trunk-v10.patch, 
 hbase-5162-trunk-v11.patch, hbase-5162-trunk-v12-committed.patch, 
 hbase-5162-trunk-v2.patch, hbase-5162-trunk-v3.patch, 
 hbase-5162-trunk-v4.patch, hbase-5162-trunk-v5.patch, 
 hbase-5162-trunk-v6.patch, hbase-5162-trunk-v7.patch, 
 hbase-5162-trunk-v8.patch, java_HBASE-5162.patch


 The current blocking we do when we are close to some limits (memstores over 
 the multiplier factor, too many store files, global memstore memory) is bad, 
 too coarse and confusing. After hitting HBASE-5161, it really becomes obvious 
 that we need something better.
 I did a little brainstorm with Stack, we came up quickly with two solutions:
  - Send some exception to the client, like OverloadedException, that's thrown 
 when some situation happens like getting past the low memory barrier. It 
 would be thrown when the client gets a handler and does some check while 
 putting or deleting. The client would treat this a retryable exception but 
 ideally wouldn't check .META. for a new location. It could be fancy and have 
 multiple levels of pushback, like send the exception to 25% of the clients, 
 and then go up if the situation persists. Should be easy to implement but 
 we'll be using a lot more IO to send the payload over and over again (but at 
 least it wouldn't sit in the RS's memory).
  - Send a message alongside a successful put or delete to tell the client to 
 slow down a little, this way we don't have to do back and forth with the 
 payload between the client and the server. It's a cleaner (I think) but more 
 involved solution.
 In every case the RS should do very obvious things to notify the operators of 
 this situation, through logs, web UI, metrics, etc.
 Other ideas?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11290) Unlock RegionStates

2014-12-16 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14249086#comment-14249086
 ] 

Andrew Purtell commented on HBASE-11290:


The new LockCache in the v2 patch is reminiscent of HFile's IdLock. Can this be 
refactored so those places share common code? 

(There's RowLock in HRegion also, but that would be significantly more work; 
maybe at a future time.)

 Unlock RegionStates
 ---

 Key: HBASE-11290
 URL: https://issues.apache.org/jira/browse/HBASE-11290
 Project: HBase
  Issue Type: Sub-task
Reporter: Francis Liu
Assignee: Virag Kothari
 Fix For: 1.0.0, 2.0.0, 0.98.10

 Attachments: HBASE-11290-0.98.patch, HBASE-11290-0.98_v2.patch, 
 HBASE-11290.draft.patch


 RegionStates is a highly accessed data structure in HMaster, yet most of its 
 methods are synchronized, which limits concurrency. Even simply making some of 
 the getters non-synchronized by using concurrent data structures has helped 
 with region assignments. We can go with something as simple as that, or create 
 locks per region, or a bucket lock per region bucket.
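 To make the "bucket lock per region bucket" alternative concrete, a hedged 
 sketch using plain JDK locks (not the LockCache from the patch):
 {code}
 import java.util.concurrent.locks.ReentrantLock;

 // Striped ("bucket") locking: regions hash into a fixed pool of locks, so
 // unrelated regions no longer serialize on one monitor, while two threads
 // touching the same region still exclude each other.
 public class RegionLockBuckets {
   private final ReentrantLock[] buckets;

   RegionLockBuckets(int numBuckets) {
     buckets = new ReentrantLock[numBuckets];
     for (int i = 0; i < numBuckets; i++) {
       buckets[i] = new ReentrantLock();
     }
   }

   ReentrantLock lockFor(String encodedRegionName) {
     int idx = (encodedRegionName.hashCode() & Integer.MAX_VALUE) % buckets.length;
     return buckets[idx];
   }

   void withRegionLock(String encodedRegionName, Runnable body) {
     ReentrantLock lock = lockFor(encodedRegionName);
     lock.lock();
     try {
       body.run();
     } finally {
       lock.unlock();
     }
   }
 }
 {code}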



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HBASE-12694) testTableExistsIfTheSpecifiedTableRegionIsSplitParent in TestSplitTransactionOnCluster class leaves regions in transition

2014-12-16 Thread Vandana Ayyalasomayajula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vandana Ayyalasomayajula reassigned HBASE-12694:


Assignee: Vandana Ayyalasomayajula

 testTableExistsIfTheSpecifiedTableRegionIsSplitParent in 
 TestSplitTransactionOnCluster class leaves regions in transition
 -

 Key: HBASE-12694
 URL: https://issues.apache.org/jira/browse/HBASE-12694
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.0.0, 0.98.9
Reporter: Vandana Ayyalasomayajula
Assignee: Vandana Ayyalasomayajula
Priority: Minor

 There seems to be a cleanup issue with the unit test 
 testTableExistsIfTheSpecifiedTableRegionIsSplitParent in the 
 TestSplitTransactionOnCluster class. It always leaves a region in transition, 
 so any unit test that executes after it will see the balance method return 
 false.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12694) testTableExistsIfTheSpecifiedTableRegionIsSplitParent in TestSplitTransactionOnCluster class leaves regions in transition

2014-12-16 Thread Vandana Ayyalasomayajula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vandana Ayyalasomayajula updated HBASE-12694:
-
Attachment: HBASE-12694-branch-1.patch

Patch for branch-1.

 testTableExistsIfTheSpecifiedTableRegionIsSplitParent in 
 TestSplitTransactionOnCluster class leaves regions in transition
 -

 Key: HBASE-12694
 URL: https://issues.apache.org/jira/browse/HBASE-12694
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.0.0, 0.98.9
Reporter: Vandana Ayyalasomayajula
Assignee: Vandana Ayyalasomayajula
Priority: Minor
 Attachments: HBASE-12694-branch-1.patch


 There seems to be a cleanup issue with the unit test 
 testTableExistsIfTheSpecifiedTableRegionIsSplitParent in the 
 TestSplitTransactionOnCluster class. It always leaves a region in transition, 
 so any unit test that executes after it will see the balance method return 
 false.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12699) undefined method `setAsyncLogFlush' exception thrown when setting DEFERRED_LOG_FLUSH=true

2014-12-16 Thread Stephen Yuan Jiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen Yuan Jiang updated HBASE-12699:
---
Attachment: HBASE-12699.v1addendum.patch

 undefined method `setAsyncLogFlush' exception thrown when setting 
 DEFERRED_LOG_FLUSH=true
 --

 Key: HBASE-12699
 URL: https://issues.apache.org/jira/browse/HBASE-12699
 Project: HBase
  Issue Type: Bug
  Components: shell
Affects Versions: 2.0.0, 0.99.2
Reporter: Stephen Yuan Jiang
Assignee: Stephen Yuan Jiang
 Fix For: 1.0.0, 2.0.0, 0.98.10

 Attachments: HBASE-12699.v1.branch-1.patch, 
 HBASE-12699.v1.master.patch, HBASE-12699.v1addendum.patch

   Original Estimate: 24h
  Time Spent: 4h
  Remaining Estimate: 1h

 In hbase shell, when trying to set DEFERRED_LOG_FLUSH during create or alter, 
 an undefined method `setAsyncLogFlush' exception was thrown.
 This is because DEFERRED_LOG_FLUSH was deprecated and the setAsyncLogFlush 
 method was removed; it was replaced by DURABILITY.
 DEFERRED_LOG_FLUSH=true is the same as DURABILITY='ASYNC_WAL'.
 The default is DURABILITY='SYNC_WAL', which is the same as the default 
 DEFERRED_LOG_FLUSH=false.
 We should ask users to use the DURABILITY setting.  In the meantime, for 
 backward compatibility, the hbase shell should still allow setting 
 DEFERRED_LOG_FLUSH.  Internally, instead of calling setAsyncLogFlush, it 
 should call setDurability.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12699) undefined method `setAsyncLogFlush' exception thrown when setting DEFERRED_LOG_FLUSH=true

2014-12-16 Thread Stephen Yuan Jiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14249104#comment-14249104
 ] 

Stephen Yuan Jiang commented on HBASE-12699:


Added an addendum to fix the help comments.

 undefined method `setAsyncLogFlush' exception thrown when setting 
 DEFERRED_LOG_FLUSH=true
 --

 Key: HBASE-12699
 URL: https://issues.apache.org/jira/browse/HBASE-12699
 Project: HBase
  Issue Type: Bug
  Components: shell
Affects Versions: 2.0.0, 0.99.2
Reporter: Stephen Yuan Jiang
Assignee: Stephen Yuan Jiang
 Fix For: 1.0.0, 2.0.0, 0.98.10

 Attachments: HBASE-12699.v1.branch-1.patch, 
 HBASE-12699.v1.master.patch, HBASE-12699.v1addendum.patch

   Original Estimate: 24h
  Time Spent: 4h
  Remaining Estimate: 1h

 In hbase shell, when trying to set DEFERRED_LOG_FLUSH during create or alter, 
 an undefined method `setAsyncLogFlush' exception was thrown.
 This is because DEFERRED_LOG_FLUSH was deprecated and the setAsyncLogFlush 
 method was removed; it was replaced by DURABILITY.
 DEFERRED_LOG_FLUSH=true is the same as DURABILITY='ASYNC_WAL'.
 The default is DURABILITY='SYNC_WAL', which is the same as the default 
 DEFERRED_LOG_FLUSH=false.
 We should ask users to use the DURABILITY setting.  In the meantime, for 
 backward compatibility, the hbase shell should still allow setting 
 DEFERRED_LOG_FLUSH.  Internally, instead of calling setAsyncLogFlush, it 
 should call setDurability.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12699) undefined method `setAsyncLogFlush' exception thrown when setting DEFERRED_LOG_FLUSH=true

2014-12-16 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14249118#comment-14249118
 ] 

Ted Yu commented on HBASE-12699:


Addendum integrated to branch-1 and master.

 undefined method `setAsyncLogFlush' exception thrown when setting 
 DEFERRED_LOG_FLUSH=true
 --

 Key: HBASE-12699
 URL: https://issues.apache.org/jira/browse/HBASE-12699
 Project: HBase
  Issue Type: Bug
  Components: shell
Affects Versions: 2.0.0, 0.99.2
Reporter: Stephen Yuan Jiang
Assignee: Stephen Yuan Jiang
 Fix For: 1.0.0, 2.0.0, 0.98.10

 Attachments: HBASE-12699.v1.branch-1.patch, 
 HBASE-12699.v1.master.patch, HBASE-12699.v1addendum.patch

   Original Estimate: 24h
  Time Spent: 4h
  Remaining Estimate: 1h

 In hbase shell, when trying to set DEFERRED_LOG_FLUSH during create or alter, 
 an undefined method `setAsyncLogFlush' exception was thrown.
 This is because DEFERRED_LOG_FLUSH was deprecated and the setAsyncLogFlush 
 method was removed; it was replaced by DURABILITY.
 DEFERRED_LOG_FLUSH=true is the same as DURABILITY='ASYNC_WAL'.
 The default is DURABILITY='SYNC_WAL', which is the same as the default 
 DEFERRED_LOG_FLUSH=false.
 We should ask users to use the DURABILITY setting.  In the meantime, for 
 backward compatibility, the hbase shell should still allow setting 
 DEFERRED_LOG_FLUSH.  Internally, instead of calling setAsyncLogFlush, it 
 should call setDurability.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-5162) Basic client pushback mechanism

2014-12-16 Thread Jesse Yates (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesse Yates updated HBASE-5162:
---
Attachment: hbase-5162-trunk-addendum.patch

Attaching addendum for trunk.

A couple of lines of actual fix, plus a little cleanup that made it easier to 
understand while I was debugging (so it should help if anyone else looks at it 
later).

 Basic client pushback mechanism
 ---

 Key: HBASE-5162
 URL: https://issues.apache.org/jira/browse/HBASE-5162
 Project: HBase
  Issue Type: New Feature
Affects Versions: 0.92.0
Reporter: Jean-Daniel Cryans
Assignee: Jesse Yates
 Fix For: 1.0.0, 2.0.0

 Attachments: hbase-5162-branch-1-v0.patch, 
 hbase-5162-trunk-addendum.patch, hbase-5162-trunk-v0.patch, 
 hbase-5162-trunk-v1.patch, hbase-5162-trunk-v10.patch, 
 hbase-5162-trunk-v11.patch, hbase-5162-trunk-v12-committed.patch, 
 hbase-5162-trunk-v2.patch, hbase-5162-trunk-v3.patch, 
 hbase-5162-trunk-v4.patch, hbase-5162-trunk-v5.patch, 
 hbase-5162-trunk-v6.patch, hbase-5162-trunk-v7.patch, 
 hbase-5162-trunk-v8.patch, java_HBASE-5162.patch


 The current blocking we do when we are close to some limits (memstores over 
 the multiplier factor, too many store files, global memstore memory) is bad, 
 too coarse and confusing. After hitting HBASE-5161, it really becomes obvious 
 that we need something better.
 I did a little brainstorm with Stack, we came up quickly with two solutions:
  - Send some exception to the client, like OverloadedException, that's thrown 
 when some situation happens like getting past the low memory barrier. It 
 would be thrown when the client gets a handler and does some check while 
 putting or deleting. The client would treat this a retryable exception but 
 ideally wouldn't check .META. for a new location. It could be fancy and have 
 multiple levels of pushback, like send the exception to 25% of the clients, 
 and then go up if the situation persists. Should be easy to implement but 
 we'll be using a lot more IO to send the payload over and over again (but at 
 least it wouldn't sit in the RS's memory).
  - Send a message alongside a successful put or delete to tell the client to 
 slow down a little, this way we don't have to do back and forth with the 
 payload between the client and the server. It's a cleaner (I think) but more 
 involved solution.
 In every case the RS should do very obvious things to notify the operators of 
 this situation, through logs, web UI, metrics, etc.
 Other ideas?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12699) undefined method `setAsyncLogFlush' exception thrown when setting DEFERRED_LOG_FLUSH=true

2014-12-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14249145#comment-14249145
 ] 

Hudson commented on HBASE-12699:


FAILURE: Integrated in HBase-0.98 #747 (See 
[https://builds.apache.org/job/HBase-0.98/747/])
HBASE-12699 undefined method 'setAsyncLogFlush' exception thrown when setting 
DEFERRED_LOG_FLUSH=true (Stephen Jiang) (apurtell: rev 
71c0e5b9de244e0bb96ca4c5b010f981acbda1ee)
* hbase-shell/src/main/ruby/hbase/admin.rb


 undefined method `setAsyncLogFlush' exception thrown when setting 
 DEFERRED_LOG_FLUSH=true
 --

 Key: HBASE-12699
 URL: https://issues.apache.org/jira/browse/HBASE-12699
 Project: HBase
  Issue Type: Bug
  Components: shell
Affects Versions: 2.0.0, 0.99.2
Reporter: Stephen Yuan Jiang
Assignee: Stephen Yuan Jiang
 Fix For: 1.0.0, 2.0.0, 0.98.10

 Attachments: HBASE-12699.v1.branch-1.patch, 
 HBASE-12699.v1.master.patch, HBASE-12699.v1addendum.patch

   Original Estimate: 24h
  Time Spent: 4h
  Remaining Estimate: 1h

 In hbase shell, when trying to set DEFERRED_LOG_FLUSH during create or alter, 
 an undefined method `setAsyncLogFlush' exception was thrown.
 This is because DEFERRED_LOG_FLUSH was deprecated and the setAsyncLogFlush 
 method was removed; it was replaced by DURABILITY.
 DEFERRED_LOG_FLUSH=true is the same as DURABILITY='ASYNC_WAL'.
 The default is DURABILITY='SYNC_WAL', which is the same as the default 
 DEFERRED_LOG_FLUSH=false.
 We should ask users to use the DURABILITY setting.  In the meantime, for 
 backward compatibility, the hbase shell should still allow setting 
 DEFERRED_LOG_FLUSH.  Internally, instead of calling setAsyncLogFlush, it 
 should call setDurability.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-5162) Basic client pushback mechanism

2014-12-16 Thread Jesse Yates (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesse Yates updated HBASE-5162:
---
Status: Patch Available  (was: Open)

 Basic client pushback mechanism
 ---

 Key: HBASE-5162
 URL: https://issues.apache.org/jira/browse/HBASE-5162
 Project: HBase
  Issue Type: New Feature
Affects Versions: 0.92.0
Reporter: Jean-Daniel Cryans
Assignee: Jesse Yates
 Fix For: 1.0.0, 2.0.0

 Attachments: hbase-5162-branch-1-v0.patch, 
 hbase-5162-trunk-addendum.patch, hbase-5162-trunk-v0.patch, 
 hbase-5162-trunk-v1.patch, hbase-5162-trunk-v10.patch, 
 hbase-5162-trunk-v11.patch, hbase-5162-trunk-v12-committed.patch, 
 hbase-5162-trunk-v2.patch, hbase-5162-trunk-v3.patch, 
 hbase-5162-trunk-v4.patch, hbase-5162-trunk-v5.patch, 
 hbase-5162-trunk-v6.patch, hbase-5162-trunk-v7.patch, 
 hbase-5162-trunk-v8.patch, java_HBASE-5162.patch


 The current blocking we do when we are close to some limits (memstores over 
 the multiplier factor, too many store files, global memstore memory) is bad, 
 too coarse and confusing. After hitting HBASE-5161, it really becomes obvious 
 that we need something better.
 I did a little brainstorm with Stack, we came up quickly with two solutions:
  - Send some exception to the client, like OverloadedException, that's thrown 
 when some situation happens like getting past the low memory barrier. It 
 would be thrown when the client gets a handler and does some check while 
 putting or deleting. The client would treat this a retryable exception but 
 ideally wouldn't check .META. for a new location. It could be fancy and have 
 multiple levels of pushback, like send the exception to 25% of the clients, 
 and then go up if the situation persists. Should be easy to implement but 
 we'll be using a lot more IO to send the payload over and over again (but at 
 least it wouldn't sit in the RS's memory).
  - Send a message alongside a successful put or delete to tell the client to 
 slow down a little, this way we don't have to do back and forth with the 
 payload between the client and the server. It's a cleaner (I think) but more 
 involved solution.
 In every case the RS should do very obvious things to notify the operators of 
 this situation, through logs, web UI, metrics, etc.
 Other ideas?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12690) list_quotas command is failing with not able to load Java class

2014-12-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14249155#comment-14249155
 ] 

Hudson commented on HBASE-12690:


FAILURE: Integrated in HBase-TRUNK #5931 (See 
[https://builds.apache.org/job/HBase-TRUNK/5931/])
HBASE-12690 list_quotas command is failing with not able to load Java class 
(Ashish) (tedyu: rev 92bc36b762a1e3f8976262169ffdaafcac60a7e7)
* hbase-shell/src/main/ruby/hbase/quotas.rb


 list_quotas command is failing with not able to load Java class
 ---

 Key: HBASE-12690
 URL: https://issues.apache.org/jira/browse/HBASE-12690
 Project: HBase
  Issue Type: Bug
  Components: shell
Affects Versions: 2.0.0, 0.99.2
Reporter: Ashish Singhi
Assignee: Ashish Singhi
 Fix For: 2.0.0

 Attachments: HBASE-12690.patch


 {noformat}
 hbase(main):004:0> list_quotas
 OWNER                                    QUOTAS
 ERROR: cannot load Java class org.apache.hadoop.hbase.client.ConnectionFactory
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-10201) Port 'Make flush decisions per column family' to trunk

2014-12-16 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-10201:
---
Attachment: 10201-addendum.txt

From 
https://builds.apache.org/job/HBase-TRUNK/5931/testReport/org.apache.hadoop.hbase.regionserver/TestPerColumnFamilyFlush/testLogReplayWithDistributedReplay/ :
{code}
java.lang.IllegalStateException: A mini-cluster is already running
	at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:865)
	at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:799)
	at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:770)
	at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:757)
	at org.apache.hadoop.hbase.regionserver.TestPerColumnFamilyFlush.testLogReplay(TestPerColumnFamilyFlush.java:349)
	at org.apache.hadoop.hbase.regionserver.TestPerColumnFamilyFlush.testLogReplayWithDistributedReplay(TestPerColumnFamilyFlush.java:430)
{code}
The attached addendum changes TestPerColumnFamilyFlush to a large test so that 
there is no collision like the one shown above in the Jenkins build.
TestPerColumnFamilyFlush passed with the addendum.
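In other words, the test is retagged into the large category; a hedged sketch 
of that kind of change (the category class import is illustrative, since the 
package of the marker classes differs between branches):
{code}
import org.junit.experimental.categories.Category;
// Illustrative import: the size-category marker classes live in the HBase
// test jars and their package varies by branch.
import org.apache.hadoop.hbase.testclassification.LargeTests;

@Category(LargeTests.class)   // previously a medium test
public class TestPerColumnFamilyFlush {
  // Test methods unchanged; only the size category changes, which (per the
  // comment above) is intended to avoid the collision seen on Jenkins.
}
{code}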

 Port 'Make flush decisions per column family' to trunk
 --

 Key: HBASE-10201
 URL: https://issues.apache.org/jira/browse/HBASE-10201
 Project: HBase
  Issue Type: Improvement
  Components: wal
Reporter: Ted Yu
Assignee: zhangduo
 Fix For: 2.0.0

 Attachments: 10201-addendum.txt, 3149-trunk-v1.txt, 
 HBASE-10201-0.98.patch, HBASE-10201-0.98_1.patch, HBASE-10201-0.98_2.patch, 
 HBASE-10201-0.99.patch, HBASE-10201.patch, HBASE-10201_1.patch, 
 HBASE-10201_10.patch, HBASE-10201_11.patch, HBASE-10201_12.patch, 
 HBASE-10201_13.patch, HBASE-10201_13.patch, HBASE-10201_14.patch, 
 HBASE-10201_15.patch, HBASE-10201_16.patch, HBASE-10201_17.patch, 
 HBASE-10201_18.patch, HBASE-10201_19.patch, HBASE-10201_2.patch, 
 HBASE-10201_3.patch, HBASE-10201_4.patch, HBASE-10201_5.patch, 
 HBASE-10201_6.patch, HBASE-10201_7.patch, HBASE-10201_8.patch, 
 HBASE-10201_9.patch, compactions.png, count.png, io.png, memstore.png


 Currently the flush decision is made using the aggregate size of all column 
 families. When large and small column families co-exist, this causes many 
 small flushes of the smaller CF. We need to make per-CF flush decisions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-10278) Provide better write predictability

2014-12-16 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14249181#comment-14249181
 ] 

stack commented on HBASE-10278:
---

[~busbey] Could do the 0.89fb tack first, before this.

How would you implement this in the new regime [~busbey]? It changes FSHLog. 
You'd do a derivative or decorated FSHLog, a SwitchingFSHLog? You could get rid 
of stuff like the 'enabled' flag and checks.

Looking at the last patch, it is a pity we couldn't switch to the other writer 
when rolling the log on the current writer (given rolling takes a while). Looks 
like this was a consideration in the patch: "NOTE: Don't switch if there is an 
ongoing log roll. Most likely, this could be a redundant step."

The patch is worth a study. Pity it has to be a syncmonitor, but I am not sure 
how else you'd do it.

On keeping around edits: one implementation, rather than appending to the WAL 
directly as we do now, kept the edits in a single list and then did bulk 
appends (IIRC, there was no advantage to bulk appends over single appends). 
Edits stayed in the list until syncs came back to say it was ok to let them go. 
IIRC, it was not that much slower. It was a little more involved (it was easier 
just doing the WAL append immediately, since then we were done), but it might 
be worth considering having a single list of all outstanding edits on the other 
side of the ring buffer as the store for edits in flight (the downside would be 
extra thread coordination).
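A hedged sketch of that "hold edits until the sync acks" idea (sequence ids and 
payloads are stand-ins; this is not the FSHLog code):
{code}
import java.util.ArrayDeque;
import java.util.Deque;

// Toy model: appends are parked with the sequence id they were assigned, and
// a completed sync releases every edit at or below the synced sequence id.
public class PendingEditsSketch {
  static final class PendingEdit {
    final long seqId;
    final byte[] payload;
    PendingEdit(long seqId, byte[] payload) {
      this.seqId = seqId;
      this.payload = payload;
    }
  }

  private final Deque<PendingEdit> pending = new ArrayDeque<PendingEdit>();

  synchronized void append(long seqId, byte[] payload) {
    pending.addLast(new PendingEdit(seqId, payload));
  }

  // Called when the WAL reports that everything up to syncedSeqId is durable.
  synchronized int release(long syncedSeqId) {
    int released = 0;
    while (!pending.isEmpty() && pending.peekFirst().seqId <= syncedSeqId) {
      pending.removeFirst();
      released++;
    }
    return released;  // these edits may now be acked to clients / dropped
  }
}
{code}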


 Provide better write predictability
 ---

 Key: HBASE-10278
 URL: https://issues.apache.org/jira/browse/HBASE-10278
 Project: HBase
  Issue Type: New Feature
  Components: wal
Reporter: Himanshu Vashishtha
Assignee: Himanshu Vashishtha
 Attachments: 10278-trunk-v2.1.patch, 10278-trunk-v2.1.patch, 
 10278-wip-1.1.patch, Multiwaldesigndoc.pdf, SwitchWriterFlow.pptx


 Currently, HBase has one WAL per region server. 
 Whenever there is any latency in the write pipeline (due to whatever reasons 
 such as n/w blip, a node in the pipeline having a bad disk, etc), the overall 
 write latency suffers. 
 Jonathan Hsieh and I analyzed various approaches to tackle this issue. We 
 also looked at HBASE-5699, which talks about adding concurrent multi WALs. 
 Along with performance numbers, we also focussed on design simplicity, 
 minimum impact on MTTR  Replication, and compatibility with 0.96 and 0.98. 
 Considering all these parameters, we propose a new HLog implementation with 
 WAL Switching functionality.
 Please find attached the design doc for the same. It introduces the WAL 
 Switching feature, and experiments/results of a prototype implementation, 
 showing the benefits of this feature.
 The second goal of this work is to serve as a building block for concurrent 
 multiple WALs feature.
 Please review the doc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-10201) Port 'Make flush decisions per column family' to trunk

2014-12-16 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14249188#comment-14249188
 ] 

stack commented on HBASE-10201:
---

That'll work. Thanks.

 Port 'Make flush decisions per column family' to trunk
 --

 Key: HBASE-10201
 URL: https://issues.apache.org/jira/browse/HBASE-10201
 Project: HBase
  Issue Type: Improvement
  Components: wal
Reporter: Ted Yu
Assignee: zhangduo
 Fix For: 2.0.0

 Attachments: 10201-addendum.txt, 3149-trunk-v1.txt, 
 HBASE-10201-0.98.patch, HBASE-10201-0.98_1.patch, HBASE-10201-0.98_2.patch, 
 HBASE-10201-0.99.patch, HBASE-10201.patch, HBASE-10201_1.patch, 
 HBASE-10201_10.patch, HBASE-10201_11.patch, HBASE-10201_12.patch, 
 HBASE-10201_13.patch, HBASE-10201_13.patch, HBASE-10201_14.patch, 
 HBASE-10201_15.patch, HBASE-10201_16.patch, HBASE-10201_17.patch, 
 HBASE-10201_18.patch, HBASE-10201_19.patch, HBASE-10201_2.patch, 
 HBASE-10201_3.patch, HBASE-10201_4.patch, HBASE-10201_5.patch, 
 HBASE-10201_6.patch, HBASE-10201_7.patch, HBASE-10201_8.patch, 
 HBASE-10201_9.patch, compactions.png, count.png, io.png, memstore.png


 Currently the flush decision is made using the aggregate size of all column 
 families. When large and small column families co-exist, this causes many 
 small flushes of the smaller CF. We need to make per-CF flush decisions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-10201) Port 'Make flush decisions per column family' to trunk

2014-12-16 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14249191#comment-14249191
 ] 

Ted Yu commented on HBASE-10201:


Addendum pushed to master branch.

 Port 'Make flush decisions per column family' to trunk
 --

 Key: HBASE-10201
 URL: https://issues.apache.org/jira/browse/HBASE-10201
 Project: HBase
  Issue Type: Improvement
  Components: wal
Reporter: Ted Yu
Assignee: zhangduo
 Fix For: 2.0.0

 Attachments: 10201-addendum.txt, 3149-trunk-v1.txt, 
 HBASE-10201-0.98.patch, HBASE-10201-0.98_1.patch, HBASE-10201-0.98_2.patch, 
 HBASE-10201-0.99.patch, HBASE-10201.patch, HBASE-10201_1.patch, 
 HBASE-10201_10.patch, HBASE-10201_11.patch, HBASE-10201_12.patch, 
 HBASE-10201_13.patch, HBASE-10201_13.patch, HBASE-10201_14.patch, 
 HBASE-10201_15.patch, HBASE-10201_16.patch, HBASE-10201_17.patch, 
 HBASE-10201_18.patch, HBASE-10201_19.patch, HBASE-10201_2.patch, 
 HBASE-10201_3.patch, HBASE-10201_4.patch, HBASE-10201_5.patch, 
 HBASE-10201_6.patch, HBASE-10201_7.patch, HBASE-10201_8.patch, 
 HBASE-10201_9.patch, compactions.png, count.png, io.png, memstore.png


 Currently the flush decision is made using the aggregate size of all column 
 families. When large and small column families co-exist, this causes many 
 small flushes of the smaller CF. We need to make per-CF flush decisions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-10201) Port 'Make flush decisions per column family' to trunk

2014-12-16 Thread zhangduo (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14249199#comment-14249199
 ] 

zhangduo commented on HBASE-10201:
--

{quote}
I forgot to say thank you zhangduo for your persistence on getting this in.
{quote}

It's my pleasure to contribute code to a famous project:)
Thanks


 Port 'Make flush decisions per column family' to trunk
 --

 Key: HBASE-10201
 URL: https://issues.apache.org/jira/browse/HBASE-10201
 Project: HBase
  Issue Type: Improvement
  Components: wal
Reporter: Ted Yu
Assignee: zhangduo
 Fix For: 2.0.0

 Attachments: 10201-addendum.txt, 3149-trunk-v1.txt, 
 HBASE-10201-0.98.patch, HBASE-10201-0.98_1.patch, HBASE-10201-0.98_2.patch, 
 HBASE-10201-0.99.patch, HBASE-10201.patch, HBASE-10201_1.patch, 
 HBASE-10201_10.patch, HBASE-10201_11.patch, HBASE-10201_12.patch, 
 HBASE-10201_13.patch, HBASE-10201_13.patch, HBASE-10201_14.patch, 
 HBASE-10201_15.patch, HBASE-10201_16.patch, HBASE-10201_17.patch, 
 HBASE-10201_18.patch, HBASE-10201_19.patch, HBASE-10201_2.patch, 
 HBASE-10201_3.patch, HBASE-10201_4.patch, HBASE-10201_5.patch, 
 HBASE-10201_6.patch, HBASE-10201_7.patch, HBASE-10201_8.patch, 
 HBASE-10201_9.patch, compactions.png, count.png, io.png, memstore.png


 Currently the flush decision is made using the aggregate size of all column 
 families. When large and small column families co-exist, this causes many 
 small flushes of the smaller CF. We need to make per-CF flush decisions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12682) compaction thread throttle value is not correct in hbase-default.xml

2014-12-16 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-12682:
--
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Pushed to master. Thanks for the patch [~jerryhe]

 compaction thread throttle value is not correct in hbase-default.xml
 

 Key: HBASE-12682
 URL: https://issues.apache.org/jira/browse/HBASE-12682
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 2.0.0
Reporter: Jerry He
Assignee: Jerry He
 Fix For: 2.0.0

 Attachments: HBASE-12682-master.patch


 While doing some compaction tuning and looking up some of the parameters, I 
 noticed that hbase.regionserver.thread.compaction.throttle default value in 
 the documentation and in hbase-site.xml seems to be off the mark.
 {code}
 <property>
   <name>hbase.regionserver.thread.compaction.throttle</name>
   <value>2560</value>
   <description>There are two different thread pools for compactions, one for large compactions and
     the other for small compactions. This helps to keep compaction of lean tables (such as
     <systemitem>hbase:meta</systemitem>) fast. If a compaction is larger than this threshold, it
     goes into the large compaction pool. In most cases, the default value is appropriate. Default:
     2 x hbase.hstore.compaction.max x hbase.hregion.memstore.flush.size (which defaults to 128).
     The value field assumes that the value of hbase.hregion.memstore.flush.size is unchanged from
     the default.</description>
 </property>
 {code}
 It should be in bytes. Or am I missing anything?
 It is not a problem in 0.98 since the property is not in hbase-site.xml over 
 there. It is obtained dynamically:
 {code}
 throttlePoint = conf.getLong("hbase.regionserver.thread.compaction.throttle",
     2 * maxFilesToCompact * storeConfigInfo.getMemstoreFlushSize());
 {code}
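 As a cross-check of the arithmetic, a minimal sketch assuming the stock 
 defaults of hbase.hstore.compaction.max = 10 and a 128 MB memstore flush size:
 {code}
 public class CompactionThrottleDefault {
   public static void main(String[] args) {
     long maxFilesToCompact = 10L;                 // hbase.hstore.compaction.max default
     long memstoreFlushSize = 128L * 1024 * 1024;  // 128 MB in bytes
     long throttlePoint = 2 * maxFilesToCompact * memstoreFlushSize;
     // 2 * 10 * 134217728 = 2684354560 bytes (~2.5 GB), not 2560.
     System.out.println(throttlePoint);
   }
 }
 {code}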



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12699) undefined method `setAsyncLogFlush' exception thrown when setting DEFERRED_LOG_FLUSH=true

2014-12-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14249215#comment-14249215
 ] 

Hudson commented on HBASE-12699:


FAILURE: Integrated in HBase-TRUNK #5932 (See 
[https://builds.apache.org/job/HBase-TRUNK/5932/])
HBASE-12699 Addendum modifies shell help (Stephen Jiang) (tedyu: rev 
889675fb9c83e85edebc31f309c8e03dccfff5ab)
* 12699.v1addendum.patch
HBASE-12699 Addendum modifies shell help (Stephen Jiang) (tedyu: rev 
9e7f7211b95ef4e9c64f5e54411c7c9fa7eeb235)
* hbase-shell/src/main/ruby/shell/commands/alter.rb
* 12699.v1addendum.patch


 undefined method `setAsyncLogFlush' exception thrown when setting 
 DEFERRED_LOG_FLUSH=true
 --

 Key: HBASE-12699
 URL: https://issues.apache.org/jira/browse/HBASE-12699
 Project: HBase
  Issue Type: Bug
  Components: shell
Affects Versions: 2.0.0, 0.99.2
Reporter: Stephen Yuan Jiang
Assignee: Stephen Yuan Jiang
 Fix For: 1.0.0, 2.0.0, 0.98.10

 Attachments: HBASE-12699.v1.branch-1.patch, 
 HBASE-12699.v1.master.patch, HBASE-12699.v1addendum.patch

   Original Estimate: 24h
  Time Spent: 4h
  Remaining Estimate: 1h

 In hbase shell, when trying to set DEFERRED_LOG_FLUSH during create or alter, 
 an undefined method `setAsyncLogFlush' exception was thrown.
 This is because DEFERRED_LOG_FLUSH was deprecated and the setAsyncLogFlush 
 method was removed; it was replaced by DURABILITY.
 DEFERRED_LOG_FLUSH=true is the same as DURABILITY='ASYNC_WAL'.
 The default is DURABILITY='SYNC_WAL', which is the same as the default 
 DEFERRED_LOG_FLUSH=false.
 We should ask users to use the DURABILITY setting. In the meantime, for 
 backward compatibility, the hbase shell should still allow setting 
 DEFERRED_LOG_FLUSH. Internally, instead of calling setAsyncLogFlush, it 
 should call setDurability.
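 A minimal sketch of the mapping described above, written against the Java client API 
 (HTableDescriptor.setDurability and the Durability enum); the actual fix lives in the Ruby shell 
 code (hbase-shell/src/main/ruby/hbase/admin.rb), so this is illustrative only:
 {code}
 import org.apache.hadoop.hbase.HTableDescriptor;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.Durability;

 public class DeferredLogFlushMapping {
   // Translate the deprecated DEFERRED_LOG_FLUSH flag into the DURABILITY setting.
   static void applyDeferredLogFlush(HTableDescriptor htd, boolean deferredLogFlush) {
     htd.setDurability(deferredLogFlush ? Durability.ASYNC_WAL : Durability.SYNC_WAL);
   }

   public static void main(String[] args) {
     HTableDescriptor htd = new HTableDescriptor(TableName.valueOf("t1"));
     applyDeferredLogFlush(htd, true);        // equivalent to DEFERRED_LOG_FLUSH => true
     System.out.println(htd.getDurability()); // ASYNC_WAL
   }
 }
 {code}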



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12645) HBaseTestingUtility is using ${$HOME} for rootDir

2014-12-16 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-12645:
--
Attachment: 12645.v7.txt

Something like this [~varun_saxena]?

 HBaseTestingUtility is using ${$HOME} for rootDir
 -

 Key: HBASE-12645
 URL: https://issues.apache.org/jira/browse/HBASE-12645
 Project: HBase
  Issue Type: Test
  Components: test
Affects Versions: 1.0.0
Reporter: Nick Dimiduk
Assignee: Varun Saxena
Priority: Critical
 Fix For: 1.0.0, 2.0.0

 Attachments: 12645.v7.txt, HBASE-12645.002.patch, 
 HBASE-12645.003.patch, HBASE-12645.004.patch, HBASE-12645.004.patch, 
 HBASE-12645.005.patch, HBASE-12645.006.patch, HBASE-12645.006.patch, 
 HBASE-12645.patch


 I noticed this while running tests on branch-1
 {noformat}
 Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.009 sec <<< 
 FAILURE! - in 
 org.apache.hadoop.hbase.regionserver.wal.TestReadOldRootAndMetaEdits
 org.apache.hadoop.hbase.regionserver.wal.TestReadOldRootAndMetaEdits  Time 
 elapsed: 0.009 sec <<< ERROR!
 java.io.FileNotFoundException: Destination exists and is not a directory: 
 /homes/hortonnd/hbase
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:423)
 at 
 org.apache.hadoop.fs.ChecksumFileSystem.mkdirs(ChecksumFileSystem.java:588)
 at 
 org.apache.hadoop.hbase.HBaseTestingUtility.createRootDir(HBaseTestingUtility.java:1053)
 at 
 org.apache.hadoop.hbase.regionserver.wal.TestReadOldRootAndMetaEdits.setupBeforeClass(TestReadOldRootAndMetaEdits.java:70)
 {noformat}
 Either the testing utility has a regression or there's a config regression in 
 this test.
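 One possible direction, sketched only as an assumption about the intent (pin the test root dir 
 under the per-test data directory instead of ${user.home}); the class and constant below exist in 
 HBase, but this is not the committed patch:
 {code}
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.hbase.HBaseTestingUtility;
 import org.apache.hadoop.hbase.HConstants;

 public class TestRootDirSketch {
   public static void main(String[] args) {
     HBaseTestingUtility util = new HBaseTestingUtility();
     // Keep hbase.rootdir under the temporary test data directory rather than ${user.home}/hbase.
     Path root = util.getDataTestDir("hbase");
     Configuration conf = util.getConfiguration();
     conf.set(HConstants.HBASE_DIR, root.toString());
     System.out.println("test rootdir = " + conf.get(HConstants.HBASE_DIR));
   }
 }
 {code}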



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12699) undefined method `setAsyncLogFlush' exception thrown when setting DEFERRED_LOG_FLUSH=true

2014-12-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14249230#comment-14249230
 ] 

Hudson commented on HBASE-12699:


FAILURE: Integrated in HBase-1.0 #590 (See 
[https://builds.apache.org/job/HBase-1.0/590/])
HBASE-12699 Addendum modifies shell help (Stephen Jiang) (tedyu: rev 
bb1b2f4eebc683a2a7aa80b341664ba8129f5080)
* hbase-shell/src/main/ruby/shell/commands/alter.rb


 undefined method `setAsyncLogFlush' exception thrown when setting 
 DEFERRED_LOG_FLUSH=true
 --

 Key: HBASE-12699
 URL: https://issues.apache.org/jira/browse/HBASE-12699
 Project: HBase
  Issue Type: Bug
  Components: shell
Affects Versions: 2.0.0, 0.99.2
Reporter: Stephen Yuan Jiang
Assignee: Stephen Yuan Jiang
 Fix For: 1.0.0, 2.0.0, 0.98.10

 Attachments: HBASE-12699.v1.branch-1.patch, 
 HBASE-12699.v1.master.patch, HBASE-12699.v1addendum.patch

   Original Estimate: 24h
  Time Spent: 4h
  Remaining Estimate: 1h

 In hbase shell, when trying to set DEFERRED_LOG_FLUSH during create or alter, 
 an undefined method `setAsyncLogFlush' exception was thrown.
 This is because DEFERRED_LOG_FLUSH was deprecated and the setAsyncLogFlush 
 method was removed; it was replaced by DURABILITY.
 DEFERRED_LOG_FLUSH=true is the same as DURABILITY='ASYNC_WAL'.
 The default is DURABILITY='SYNC_WAL', which is the same as the default 
 DEFERRED_LOG_FLUSH=false.
 We should ask users to use the DURABILITY setting. In the meantime, for 
 backward compatibility, the hbase shell should still allow setting 
 DEFERRED_LOG_FLUSH. Internally, instead of calling setAsyncLogFlush, it 
 should call setDurability.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12699) undefined method `setAsyncLogFlush' exception thrown when setting DEFERRED_LOG_FLUSH=true

2014-12-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14249234#comment-14249234
 ] 

Hudson commented on HBASE-12699:


SUCCESS: Integrated in HBase-0.98-on-Hadoop-1.1 #713 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/713/])
HBASE-12699 undefined method 'setAsyncLogFlush' exception thrown when setting 
DEFERRED_LOG_FLUSH=true (Stephen Jiang) (apurtell: rev 
71c0e5b9de244e0bb96ca4c5b010f981acbda1ee)
* hbase-shell/src/main/ruby/hbase/admin.rb


 undefined method `setAsyncLogFlush' exception thrown when setting 
 DEFERRED_LOG_FLUSH=true
 --

 Key: HBASE-12699
 URL: https://issues.apache.org/jira/browse/HBASE-12699
 Project: HBase
  Issue Type: Bug
  Components: shell
Affects Versions: 2.0.0, 0.99.2
Reporter: Stephen Yuan Jiang
Assignee: Stephen Yuan Jiang
 Fix For: 1.0.0, 2.0.0, 0.98.10

 Attachments: HBASE-12699.v1.branch-1.patch, 
 HBASE-12699.v1.master.patch, HBASE-12699.v1addendum.patch

   Original Estimate: 24h
  Time Spent: 4h
  Remaining Estimate: 1h

 In hbase shell, when trying to set DEFERRED_LOG_FLUSH during create or alter, 
 an undefined method `setAsyncLogFlush' exception was thrown.
 This is because DEFERRED_LOG_FLUSH was deprecated and the setAsyncLogFlush 
 method was removed; it was replaced by DURABILITY.
 DEFERRED_LOG_FLUSH=true is the same as DURABILITY='ASYNC_WAL'.
 The default is DURABILITY='SYNC_WAL', which is the same as the default 
 DEFERRED_LOG_FLUSH=false.
 We should ask users to use the DURABILITY setting. In the meantime, for 
 backward compatibility, the hbase shell should still allow setting 
 DEFERRED_LOG_FLUSH. Internally, instead of calling setAsyncLogFlush, it 
 should call setDurability.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HBASE-12664) TestDefaultLoadBalancer.testBalanceCluster fails in CachedDNSToSwitchMapping

2014-12-16 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-12664.
---
Resolution: Fixed

Resolving again since we seem to be past this disaster commit.  Thanks for the 
bail out [~apurtell] and [~ndimiduk]

 TestDefaultLoadBalancer.testBalanceCluster fails in CachedDNSToSwitchMapping
 

 Key: HBASE-12664
 URL: https://issues.apache.org/jira/browse/HBASE-12664
 Project: HBase
  Issue Type: Bug
  Components: test
Reporter: stack
Assignee: stack
Priority: Minor
 Fix For: 1.0.0, 2.0.0

 Attachments: 12664.txt, HBASE-12664-0.99-addendum.patch


 https://builds.apache.org/job/HBase-TRUNK/5893/testReport/org.apache.hadoop.hbase.master.balancer/TestDefaultLoadBalancer/testBalanceCluster/
 {code}
 java.lang.Exception: test timed out after 6 milliseconds
   at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)
   at java.net.InetAddress$1.lookupAllHostAddr(InetAddress.java:894)
   at 
 java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1286)
   at java.net.InetAddress.getAllByName0(InetAddress.java:1239)
   at java.net.InetAddress.getAllByName(InetAddress.java:1155)
   at java.net.InetAddress.getAllByName(InetAddress.java:1091)
   at java.net.InetAddress.getByName(InetAddress.java:1041)
   at org.apache.hadoop.net.NetUtils.normalizeHostName(NetUtils.java:561)
   at org.apache.hadoop.net.NetUtils.normalizeHostNames(NetUtils.java:578)
   at 
 org.apache.hadoop.net.CachedDNSToSwitchMapping.resolve(CachedDNSToSwitchMapping.java:109)
   at 
 org.apache.hadoop.hbase.master.RackManager.getRack(RackManager.java:66)
   at 
 org.apache.hadoop.hbase.master.balancer.BaseLoadBalancer$Cluster.init(BaseLoadBalancer.java:274)
   at 
 org.apache.hadoop.hbase.master.balancer.BaseLoadBalancer$Cluster.init(BaseLoadBalancer.java:152)
   at 
 org.apache.hadoop.hbase.master.balancer.SimpleLoadBalancer.balanceCluster(SimpleLoadBalancer.java:201)
   at 
 org.apache.hadoop.hbase.master.balancer.TestDefaultLoadBalancer.testBalanceCluster(TestDefaultLoadBalancer.java:119)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12641) Grant all permissions of hbase zookeeper node to hbase superuser in a secure cluster

2014-12-16 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14249240#comment-14249240
 ] 

stack commented on HBASE-12641:
---

[~apurtell] You good w/ the above?

 Grant all permissions of hbase zookeeper node to hbase superuser in a secure 
 cluster
 

 Key: HBASE-12641
 URL: https://issues.apache.org/jira/browse/HBASE-12641
 Project: HBase
  Issue Type: Improvement
  Components: Zookeeper
Reporter: Liu Shaohui
Assignee: Liu Shaohui
Priority: Minor
 Fix For: 1.0.0

 Attachments: HBASE-12641-v1.diff


 Currently in a secure cluster, only the master/regionserver kerberos user can 
 manage the hbase znodes. But the master/regionserver kerberos user is used for 
 rpc connections, and we usually use another super user to manage the cluster.
 In some special scenarios, we need to manage the znode data with the 
 super user, e.g.:
 a, to get the data of a znode for debugging;
 b, HBASE-8253: to delete the znode for a corrupted hlog so it does not 
 block replication.
 So we should grant all permissions on the hbase zookeeper nodes to the hbase 
 superuser when creating these znodes.
 Suggestions are welcome.
 [~apurtell]
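 A rough sketch of what granting the superuser full control at znode-creation time could look like, 
 using the plain ZooKeeper client API; the quorum address and SASL principals below are 
 placeholders, and the real change would live in HBase's ZooKeeper utility code rather than in 
 application code like this:
 {code}
 import java.util.ArrayList;
 import java.util.List;
 import org.apache.zookeeper.CreateMode;
 import org.apache.zookeeper.ZooDefs;
 import org.apache.zookeeper.ZooKeeper;
 import org.apache.zookeeper.data.ACL;
 import org.apache.zookeeper.data.Id;

 public class SuperuserZnodeAclSketch {
   public static void main(String[] args) throws Exception {
     ZooKeeper zk = new ZooKeeper("zkhost:2181", 30000, null); // placeholder quorum

     List<ACL> acls = new ArrayList<ACL>();
     // Full control for the master/regionserver principal (placeholder name).
     acls.add(new ACL(ZooDefs.Perms.ALL, new Id("sasl", "hbase/host@EXAMPLE.COM")));
     // Proposed addition: full control for the configured hbase superuser as well.
     acls.add(new ACL(ZooDefs.Perms.ALL, new Id("sasl", "hbase-admin@EXAMPLE.COM")));
     // Clients still need read access to the public znodes.
     acls.add(new ACL(ZooDefs.Perms.READ, ZooDefs.Ids.ANYONE_ID_UNSAFE));

     zk.create("/hbase/example", new byte[0], acls, CreateMode.PERSISTENT);
     zk.close();
   }
 }
 {code}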



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-5162) Basic client pushback mechanism

2014-12-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14249254#comment-14249254
 ] 

Hadoop QA commented on HBASE-5162:
--

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12687610/hbase-5162-trunk-addendum.patch
  against master branch at commit 9e7f7211b95ef4e9c64f5e54411c7c9fa7eeb235.
  ATTACHMENT ID: 12687610

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12101//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12101//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12101//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12101//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12101//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12101//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12101//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12101//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12101//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12101//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12101//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12101//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12101//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12101//console

This message is automatically generated.

 Basic client pushback mechanism
 ---

 Key: HBASE-5162
 URL: https://issues.apache.org/jira/browse/HBASE-5162
 Project: HBase
  Issue Type: New Feature
Affects Versions: 0.92.0
Reporter: Jean-Daniel Cryans
Assignee: Jesse Yates
 Fix For: 1.0.0, 2.0.0

 Attachments: hbase-5162-branch-1-v0.patch, 
 hbase-5162-trunk-addendum.patch, hbase-5162-trunk-v0.patch, 
 hbase-5162-trunk-v1.patch, hbase-5162-trunk-v10.patch, 
 hbase-5162-trunk-v11.patch, hbase-5162-trunk-v12-committed.patch, 
 hbase-5162-trunk-v2.patch, hbase-5162-trunk-v3.patch, 
 hbase-5162-trunk-v4.patch, hbase-5162-trunk-v5.patch, 
 hbase-5162-trunk-v6.patch, hbase-5162-trunk-v7.patch, 
 hbase-5162-trunk-v8.patch, java_HBASE-5162.patch


 The current blocking we do when we are close to some limits (memstores over 
 the multiplier factor, too many store files, global memstore memory) is bad, 
 too coarse and confusing. After hitting HBASE-5161, it really becomes obvious 
 that we need something better.
 I did a little brainstorm with Stack, we came up quickly with two solutions:
  - Send some exception to the client, like OverloadedException, that's thrown 
 when some situation happens like getting past the low memory barrier. It 
 would be thrown when the client gets a handler and does some check while 
 

[jira] [Commented] (HBASE-5162) Basic client pushback mechanism

2014-12-16 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14249258#comment-14249258
 ] 

Ted Yu commented on HBASE-5162:
---

+1

 Basic client pushback mechanism
 ---

 Key: HBASE-5162
 URL: https://issues.apache.org/jira/browse/HBASE-5162
 Project: HBase
  Issue Type: New Feature
Affects Versions: 0.92.0
Reporter: Jean-Daniel Cryans
Assignee: Jesse Yates
 Fix For: 1.0.0, 2.0.0

 Attachments: hbase-5162-branch-1-v0.patch, 
 hbase-5162-trunk-addendum.patch, hbase-5162-trunk-v0.patch, 
 hbase-5162-trunk-v1.patch, hbase-5162-trunk-v10.patch, 
 hbase-5162-trunk-v11.patch, hbase-5162-trunk-v12-committed.patch, 
 hbase-5162-trunk-v2.patch, hbase-5162-trunk-v3.patch, 
 hbase-5162-trunk-v4.patch, hbase-5162-trunk-v5.patch, 
 hbase-5162-trunk-v6.patch, hbase-5162-trunk-v7.patch, 
 hbase-5162-trunk-v8.patch, java_HBASE-5162.patch


 The current blocking we do when we are close to some limits (memstores over 
 the multiplier factor, too many store files, global memstore memory) is bad, 
 too coarse and confusing. After hitting HBASE-5161, it really becomes obvious 
 that we need something better.
 I did a little brainstorm with Stack, we came up quickly with two solutions:
  - Send some exception to the client, like OverloadedException, that's thrown 
 when some situation happens like getting past the low memory barrier. It 
 would be thrown when the client gets a handler and does some check while 
 putting or deleting. The client would treat this as a retryable exception but 
 ideally wouldn't check .META. for a new location. It could be fancy and have 
 multiple levels of pushback, like send the exception to 25% of the clients, 
 and then go up if the situation persists. Should be easy to implement but 
 we'll be using a lot more IO to send the payload over and over again (but at 
 least it wouldn't sit in the RS's memory).
  - Send a message alongside a successful put or delete to tell the client to 
 slow down a little, this way we don't have to do back and forth with the 
 payload between the client and the server. It's a cleaner (I think) but more 
 involved solution.
 In every case the RS should do very obvious things to notify the operators of 
 this situation, through logs, web UI, metrics, etc.
 Other ideas?
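 To make the first option concrete, here is a hedged sketch of a server-side check that rejects a 
 write with a retryable exception once memory passes the low watermark; the exception name and the 
 hook point are illustrative, not the committed design:
 {code}
 import java.io.IOException;

 public class PushbackSketch {
   /** Illustrative retryable exception; the client backs off and retries without re-reading meta. */
   public static class OverloadedException extends IOException {
     public OverloadedException(String msg) { super(msg); }
   }

   /** Hypothetical hook a handler would call before applying a put/delete. */
   static void checkPushback(long globalMemstoreBytes, long lowWatermarkBytes)
       throws OverloadedException {
     if (globalMemstoreBytes >= lowWatermarkBytes) {
       throw new OverloadedException("global memstore at " + globalMemstoreBytes
           + " bytes is over the low watermark of " + lowWatermarkBytes + " bytes; retry later");
     }
   }

   public static void main(String[] args) throws OverloadedException {
     checkPushback(100L << 20, 400L << 20); // under the watermark: accepted
     checkPushback(500L << 20, 400L << 20); // over the watermark: throws OverloadedException
   }
 }
 {code}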



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12121) maven release plugin does not allow for customized goals

2014-12-16 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-12121:
--
   Resolution: Fixed
Fix Version/s: 0.98.10
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Pushed to 0.98+

 maven release plugin does not allow for customized goals
 

 Key: HBASE-12121
 URL: https://issues.apache.org/jira/browse/HBASE-12121
 Project: HBase
  Issue Type: Improvement
  Components: build
Affects Versions: 0.98.6
Reporter: Enoch Hsu
Assignee: Enoch Hsu
Priority: Minor
 Fix For: 1.0.0, 2.0.0, 0.98.10

 Attachments: HBASE-12121.patch


 Inside the pom under the maven-release-plugin there is a configuration that 
 defines what the release-plugin uses like so
 <configuration>
   <!--You need this profile. It'll sign your artifacts.
       I'm not sure if this config. actually works though.
       I've been specifying -Papache-release on the command-line
     -->
   <releaseProfiles>apache-release</releaseProfiles>
   <!--This stops our running tests for each stage of maven release.
       But it builds the test jar. From SUREFIRE-172.
     -->
   <arguments>-Dmaven.test.skip.exec</arguments>
   <pomFileName>pom.xml</pomFileName>
 </configuration>
 There is no property for goals so if the user passes in a goal from the 
 command line it will not get executed and the default behavior will be used 
 instead.
 I propose to add in the following
 <goals>${goals}</goals>
 This will allow custom release goal options



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12682) compaction thread throttle value is not correct in hbase-default.xml

2014-12-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14249274#comment-14249274
 ] 

Hudson commented on HBASE-12682:


FAILURE: Integrated in HBase-TRUNK #5933 (See 
[https://builds.apache.org/job/HBase-TRUNK/5933/])
HBASE-12682 compaction thread throttle value is not correct in 
hbase-default.xml (Jerry He) (stack: rev 
d845a92fc8007085999c8beebca7e7b00ca5322e)
* hbase-common/src/main/resources/hbase-default.xml


 compaction thread throttle value is not correct in hbase-default.xml
 

 Key: HBASE-12682
 URL: https://issues.apache.org/jira/browse/HBASE-12682
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 2.0.0
Reporter: Jerry He
Assignee: Jerry He
 Fix For: 2.0.0

 Attachments: HBASE-12682-master.patch


 While doing some compaction tuning and looking up some of the parameters, I 
 noticed that hbase.regionserver.thread.compaction.throttle default value in 
 the documentation and in hbase-site.xml seems to be off the mark.
 {code}
   <property>
     <name>hbase.regionserver.thread.compaction.throttle</name>
     <value>2560</value>
     <description>There are two different thread pools for compactions, one for
       large compactions and the other for small compactions. This helps to keep
       compaction of lean tables (such as <systemitem>hbase:meta</systemitem>) fast.
       If a compaction is larger than this threshold, it goes into the large
       compaction pool. In most cases, the default value is appropriate. Default:
       2 x hbase.hstore.compaction.max x hbase.hregion.memstore.flush.size
       (which defaults to 128). The value field assumes that the value of
       hbase.hregion.memstore.flush.size is unchanged from the default.</description>
   </property>
 {code}
 It should be in bytes. Or am I missing anything?
 It is not a problem in 0.98 since the property is not in hbase-site.xml over 
 there. It is obtained dynamically:
 {code}
 throttlePoint = conf.getLong("hbase.regionserver.thread.compaction.throttle",
   2 * maxFilesToCompact * storeConfigInfo.getMemstoreFlushSize());
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-10201) Port 'Make flush decisions per column family' to trunk

2014-12-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14249275#comment-14249275
 ] 

Hudson commented on HBASE-10201:


FAILURE: Integrated in HBase-TRUNK #5933 (See 
[https://builds.apache.org/job/HBase-TRUNK/5933/])
HBASE-10201 Addendum changes TestPerColumnFamilyFlush to LargeTest (tedyu: rev 
885b065683499540f467cb54086a3f60e64b9c8a)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestPerColumnFamilyFlush.java


 Port 'Make flush decisions per column family' to trunk
 --

 Key: HBASE-10201
 URL: https://issues.apache.org/jira/browse/HBASE-10201
 Project: HBase
  Issue Type: Improvement
  Components: wal
Reporter: Ted Yu
Assignee: zhangduo
 Fix For: 2.0.0

 Attachments: 10201-addendum.txt, 3149-trunk-v1.txt, 
 HBASE-10201-0.98.patch, HBASE-10201-0.98_1.patch, HBASE-10201-0.98_2.patch, 
 HBASE-10201-0.99.patch, HBASE-10201.patch, HBASE-10201_1.patch, 
 HBASE-10201_10.patch, HBASE-10201_11.patch, HBASE-10201_12.patch, 
 HBASE-10201_13.patch, HBASE-10201_13.patch, HBASE-10201_14.patch, 
 HBASE-10201_15.patch, HBASE-10201_16.patch, HBASE-10201_17.patch, 
 HBASE-10201_18.patch, HBASE-10201_19.patch, HBASE-10201_2.patch, 
 HBASE-10201_3.patch, HBASE-10201_4.patch, HBASE-10201_5.patch, 
 HBASE-10201_6.patch, HBASE-10201_7.patch, HBASE-10201_8.patch, 
 HBASE-10201_9.patch, compactions.png, count.png, io.png, memstore.png


 Currently the flush decision is made using the aggregate size of all column 
 families. When large and small column families co-exist, this causes many 
 small flushes of the smaller CF. We need to make per-CF flush decisions.
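 A minimal sketch of a per-CF flush decision, independent of HBase's actual flush-policy classes; 
 the per-family threshold and the whole-region fallback below are assumptions for illustration only:
 {code}
 import java.util.HashMap;
 import java.util.HashSet;
 import java.util.Map;
 import java.util.Set;

 public class PerFamilyFlushSketch {
   /** Pick only the column families whose memstore is large enough to be worth flushing. */
   static Set<String> selectFamiliesToFlush(Map<String, Long> memstoreBytesByFamily,
       long perFamilyThreshold, long regionFlushLimit) {
     Set<String> selected = new HashSet<String>();
     long total = 0;
     for (Map.Entry<String, Long> e : memstoreBytesByFamily.entrySet()) {
       total += e.getValue();
       if (e.getValue() >= perFamilyThreshold) {
         selected.add(e.getKey());
       }
     }
     // If nothing individually qualifies but the region as a whole is over its limit,
     // fall back to the old behaviour and flush every family.
     if (selected.isEmpty() && total >= regionFlushLimit) {
       selected.addAll(memstoreBytesByFamily.keySet());
     }
     return selected;
   }

   public static void main(String[] args) {
     Map<String, Long> sizes = new HashMap<String, Long>();
     sizes.put("big_cf", 120L << 20);   // 120 MB
     sizes.put("small_cf", 2L << 20);   // 2 MB
     // Only big_cf is selected, so small_cf no longer produces tiny store files.
     System.out.println(selectFamiliesToFlush(sizes, 16L << 20, 128L << 20));
   }
 }
 {code}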



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-11412) Minimize a number of hbase-client transitive dependencies

2014-12-16 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-11412:
--
Status: Patch Available  (was: Open)

Just ran into this over in htrace.  Our list of transitive includes is a little 
on the obscene side.

 Minimize a number of hbase-client transitive dependencies
 -

 Key: HBASE-11412
 URL: https://issues.apache.org/jira/browse/HBASE-11412
 Project: HBase
  Issue Type: Improvement
  Components: Client
Affects Versions: 0.98.3
Reporter: Sergey Beryozkin
Priority: Minor
 Fix For: 1.0.0, 2.0.0

 Attachments: hbase11412.txt


 hbase-client has a number of transitive dependencies not needed for a client 
 mode execution. In my test I've added the following exclusions:
 {code:xml}
 <exclusions>
   <exclusion>
     <groupId>com.sun.jersey</groupId>
     <artifactId>jersey-server</artifactId>
   </exclusion>
   <exclusion>
     <groupId>com.sun.jersey</groupId>
     <artifactId>jersey-core</artifactId>
   </exclusion>
   <exclusion>
     <groupId>com.sun.jersey</groupId>
     <artifactId>jersey-json</artifactId>
   </exclusion>
   <exclusion>
     <groupId>com.sun.jersey.contribs</groupId>
     <artifactId>jersey-guice</artifactId>
   </exclusion>
   <exclusion>
     <groupId>com.google.inject</groupId>
     <artifactId>guice</artifactId>
   </exclusion>
   <exclusion>
     <groupId>com.google.inject.extensions</groupId>
     <artifactId>guice-servlet</artifactId>
   </exclusion>
   <exclusion>
     <groupId>org.mortbay.jetty</groupId>
     <artifactId>jetty</artifactId>
   </exclusion>
   <exclusion>
     <groupId>org.mortbay.jetty</groupId>
     <artifactId>jetty-util</artifactId>
   </exclusion>
   <exclusion>
     <groupId>commons-httpclient</groupId>
     <artifactId>commons-httpclient</artifactId>
   </exclusion>
 </exclusions>
 {code}
 Proposal: add related exclusions to some of the dependencies hbase-client 
 depends upon.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12652) Allow unmanaged connections in MetaTableAccessor

2014-12-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14249283#comment-14249283
 ] 

Hudson commented on HBASE-12652:


FAILURE: Integrated in HBase-1.0 #591 (See 
[https://builds.apache.org/job/HBase-1.0/591/])
Revert HBASE-12652 Allow unmanaged connections in MetaTableAccessor (Solomon 
Duskis) (stack: rev 4587a693b0e61531821e716a4e2292b163c59770)
* hbase-server/src/test/java/org/apache/hadoop/hbase/TestRegionRebalancing.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/TestMetaTableAccessor.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestHBaseFsck.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/ClusterConnection.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationWithTags.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionAdapter.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/MetaTableAccessor.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionManager.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/util/HBaseFsck.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/HConnectionTestingUtility.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/util/HBaseFsckRepair.java


 Allow unmanaged connections in MetaTableAccessor
 

 Key: HBASE-12652
 URL: https://issues.apache.org/jira/browse/HBASE-12652
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.0.0, 2.0.0
Reporter: Solomon Duskis
Assignee: Solomon Duskis
 Fix For: 1.0.0, 2.0.0

 Attachments: HBASE-12652.patch, HBASE-12652.patch, 
 HBASE-12652B.patch, HBASE-12652C-0.99.patch, HBASE-12652C.patch


 Passing in an unmanaged connection to MetaTableAccessor causes an exception. 
  MetaTableAccessor should be able to use both unmanaged and managed 
 connections.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11412) Minimize a number of hbase-client transitive dependencies

2014-12-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14249286#comment-14249286
 ] 

Hadoop QA commented on HBASE-11412:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12682868/hbase11412.txt
  against master branch at commit 5b5c981d954e0e7769e486039371c4603d2ef915.
  ATTACHMENT ID: 12682868

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:red}-1 javac{color}.  The patch appears to cause mvn compile goal to 
fail.

Compilation errors resume:
[ERROR] COMPILATION ERROR : 
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-client/src/main/java/org/apache/hadoop/hbase/security/token/TokenUtil.java:[44,32]
 package org.apache.hadoop.mapred does not exist
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-client/src/main/java/org/apache/hadoop/hbase/security/token/TokenUtil.java:[45,35]
 package org.apache.hadoop.mapreduce does not exist
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-client/src/main/java/org/apache/hadoop/hbase/security/token/TokenUtil.java:[191,67]
 cannot find symbol
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-client/src/main/java/org/apache/hadoop/hbase/security/token/TokenUtil.java:[212,18]
 cannot find symbol
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-client/src/main/java/org/apache/hadoop/hbase/security/token/TokenUtil.java:[248,46]
 cannot find symbol
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-client/src/main/java/org/apache/hadoop/hbase/security/token/TokenUtil.java:[269,69]
 cannot find symbol
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-client/src/main/java/org/apache/hadoop/hbase/security/token/TokenUtil.java:[305,66]
 cannot find symbol
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-client/src/main/java/org/apache/hadoop/hbase/security/token/TokenUtil.java:[325,71]
 cannot find symbol
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.2:compile (default-compile) on 
project hbase-client: Compilation failure: Compilation failure:
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-client/src/main/java/org/apache/hadoop/hbase/security/token/TokenUtil.java:[44,32]
 package org.apache.hadoop.mapred does not exist
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-client/src/main/java/org/apache/hadoop/hbase/security/token/TokenUtil.java:[45,35]
 package org.apache.hadoop.mapreduce does not exist
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-client/src/main/java/org/apache/hadoop/hbase/security/token/TokenUtil.java:[191,67]
 cannot find symbol
[ERROR] symbol:   class Job
[ERROR] location: class org.apache.hadoop.hbase.security.token.TokenUtil
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-client/src/main/java/org/apache/hadoop/hbase/security/token/TokenUtil.java:[212,18]
 cannot find symbol
[ERROR] symbol:   class Job
[ERROR] location: class org.apache.hadoop.hbase.security.token.TokenUtil
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-client/src/main/java/org/apache/hadoop/hbase/security/token/TokenUtil.java:[248,46]
 cannot find symbol
[ERROR] symbol:   class JobConf
[ERROR] location: class org.apache.hadoop.hbase.security.token.TokenUtil
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-client/src/main/java/org/apache/hadoop/hbase/security/token/TokenUtil.java:[269,69]
 cannot find symbol
[ERROR] symbol:   class JobConf
[ERROR] location: class org.apache.hadoop.hbase.security.token.TokenUtil
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-client/src/main/java/org/apache/hadoop/hbase/security/token/TokenUtil.java:[305,66]
 cannot find symbol
[ERROR] symbol:   class JobConf
[ERROR] location: class org.apache.hadoop.hbase.security.token.TokenUtil
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-client/src/main/java/org/apache/hadoop/hbase/security/token/TokenUtil.java:[325,71]
 cannot find symbol
[ERROR] symbol:   class Job
[ERROR] location: class org.apache.hadoop.hbase.security.token.TokenUtil
[ERROR] - [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.

[jira] [Commented] (HBASE-12121) maven release plugin does not allow for customized goals

2014-12-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14249284#comment-14249284
 ] 

Hudson commented on HBASE-12121:


FAILURE: Integrated in HBase-1.0 #591 (See 
[https://builds.apache.org/job/HBase-1.0/591/])
HBASE-12121 maven release plugin does not allow for customized goals (Enoch 
Hsu) (stack: rev 0635705351b896c88f6ac29be06428fe184843c9)
* pom.xml


 maven release plugin does not allow for customized goals
 

 Key: HBASE-12121
 URL: https://issues.apache.org/jira/browse/HBASE-12121
 Project: HBase
  Issue Type: Improvement
  Components: build
Affects Versions: 0.98.6
Reporter: Enoch Hsu
Assignee: Enoch Hsu
Priority: Minor
 Fix For: 1.0.0, 2.0.0, 0.98.10

 Attachments: HBASE-12121.patch


 Inside the pom under the maven-release-plugin there is a configuration that 
 defines what the release-plugin uses like so
 <configuration>
   <!--You need this profile. It'll sign your artifacts.
       I'm not sure if this config. actually works though.
       I've been specifying -Papache-release on the command-line
     -->
   <releaseProfiles>apache-release</releaseProfiles>
   <!--This stops our running tests for each stage of maven release.
       But it builds the test jar. From SUREFIRE-172.
     -->
   <arguments>-Dmaven.test.skip.exec</arguments>
   <pomFileName>pom.xml</pomFileName>
 </configuration>
 There is no property for goals so if the user passes in a goal from the 
 command line it will not get executed and the default behavior will be used 
 instead.
 I propose to add in the following
 <goals>${goals}</goals>
 This will allow custom release goal options



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12688) Update site with a bootstrap-based UI

2014-12-16 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14249301#comment-14249301
 ] 

Nick Dimiduk commented on HBASE-12688:
--

For those interested, here's the results: 
http://people.apache.org/~ndimiduk/site/

 Update site with a bootstrap-based UI
 -

 Key: HBASE-12688
 URL: https://issues.apache.org/jira/browse/HBASE-12688
 Project: HBase
  Issue Type: Bug
  Components: site
Affects Versions: 2.0.0
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
 Fix For: 2.0.0

 Attachments: HBASE-12688.00-reflow.patch, HBASE-12688.00.patch


 Looks like we can upgrade our look pretty cheaply by just swapping to a 
 different skin. This fluido-skin uses Twitter Bootstrap. It's an older 2.x 
 version (upstream has moved onto 3.x), but it's a start. There's some 
 out-of-the-box configuration choices regarding menu bar location. We can also 
 look into some of our own custom CSS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12688) Update site with a bootstrap-based UI

2014-12-16 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14249319#comment-14249319
 ] 

stack commented on HBASE-12688:
---

Suggest choosing a style that is more plain, such as 'spacelab' or 'flatly'.  I think 
if you let version and published go left, the gray bar under them will get 
fatter and fit them better (at least I had the same problem over on 
http://htrace.incubator.apache.org/, where I followed your lead here and letting 
them go left seemed to 'fix' the background thing).  We can figure out how to get 
the search back later.  Looks great.

 Update site with a bootstrap-based UI
 -

 Key: HBASE-12688
 URL: https://issues.apache.org/jira/browse/HBASE-12688
 Project: HBase
  Issue Type: Bug
  Components: site
Affects Versions: 2.0.0
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
 Fix For: 2.0.0

 Attachments: HBASE-12688.00-reflow.patch, HBASE-12688.00.patch


 Looks like we can upgrade our look pretty cheaply by just swapping to a 
 different skin. This fluido-skin uses Twitter Bootstrap. It's an older 2.x 
 version (upstream has moved onto 3.x), but it's a start. There's some 
 out-of-the-box configuration choices regarding menu bar location. We can also 
 look into some of our own custom CSS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12645) HBaseTestingUtility is using ${$HOME} for rootDir

2014-12-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14249339#comment-14249339
 ] 

Hadoop QA commented on HBASE-12645:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12687624/12645.v7.txt
  against master branch at commit d845a92fc8007085999c8beebca7e7b00ca5322e.
  ATTACHMENT ID: 12687624

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 18 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.client.TestClientPushback

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12102//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12102//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12102//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12102//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12102//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12102//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12102//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12102//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12102//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12102//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12102//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12102//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12102//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12102//console

This message is automatically generated.

 HBaseTestingUtility is using ${$HOME} for rootDir
 -

 Key: HBASE-12645
 URL: https://issues.apache.org/jira/browse/HBASE-12645
 Project: HBase
  Issue Type: Test
  Components: test
Affects Versions: 1.0.0
Reporter: Nick Dimiduk
Assignee: Varun Saxena
Priority: Critical
 Fix For: 1.0.0, 2.0.0

 Attachments: 12645.v7.txt, HBASE-12645.002.patch, 
 HBASE-12645.003.patch, HBASE-12645.004.patch, HBASE-12645.004.patch, 
 HBASE-12645.005.patch, HBASE-12645.006.patch, HBASE-12645.006.patch, 
 HBASE-12645.patch


 I noticed this while running tests on branch-1
 {noformat}
 Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.009 sec <<< 
 FAILURE! - in 
 org.apache.hadoop.hbase.regionserver.wal.TestReadOldRootAndMetaEdits
 org.apache.hadoop.hbase.regionserver.wal.TestReadOldRootAndMetaEdits  Time 
 elapsed: 0.009 sec <<< ERROR!
 java.io.FileNotFoundException: Destination exists and is not a directory: 
 /homes/hortonnd/hbase
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:423)
 at 
 org.apache.hadoop.fs.ChecksumFileSystem.mkdirs(ChecksumFileSystem.java:588)
 at 
 
