[jira] [Commented] (HBASE-16234) Expect and handle nulls when assigning replicas
[ https://issues.apache.org/jira/browse/HBASE-16234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15378889#comment-15378889 ] Heng Chen commented on HBASE-16234: --- Yeah, master has the same issue. Do you want to upload a patch? > Expect and handle nulls when assigning replicas > --- > > Key: HBASE-16234 > URL: https://issues.apache.org/jira/browse/HBASE-16234 > Project: HBase > Issue Type: Bug > Components: Region Assignment >Affects Versions: 1.2.0 >Reporter: Harsh J > > Observed this on a cluster: > {code} > FATAL org.apache.hadoop.hbase.master.HMaster: Unhandled exception. Starting > shutdown. > java.lang.NullPointerException > at > org.apache.hadoop.hbase.master.AssignmentManager.replicaRegionsNotRecordedInMeta(AssignmentManager.java:2799) > > at > org.apache.hadoop.hbase.master.AssignmentManager.assignAllUserRegions(AssignmentManager.java:2778) > > at > org.apache.hadoop.hbase.master.AssignmentManager.processDeadServersAndRegionsInTransition(AssignmentManager.java:638) > > at > org.apache.hadoop.hbase.master.AssignmentManager.joinCluster(AssignmentManager.java:485) > > at > org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:723) > > at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:169) > at org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1481) > at java.lang.Thread.run(Thread.java:745) > {code} > It looks like {{FSTableDescriptors#get(…)}} can be expected to return null in > some cases, but {{AssignmentManager.replicaRegionsNotRecordedInMeta(…)}} does > not currently have any handling for such a possibility. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
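The null handling the issue asks for can be sketched as follows. This is an illustrative sketch only, assuming the descriptor lookup can return null as {{FSTableDescriptors#get(…)}} does; the class, method, and lambda below are hypothetical stand-ins, not the actual HBase patch:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

public class ReplicaNullCheckSketch {
    /** Returns replica counts, skipping tables whose descriptor lookup returns null. */
    static List<Integer> replicaCounts(Function<String, Integer> descriptorLookup,
                                       List<String> tables) {
        List<Integer> out = new ArrayList<>();
        for (String t : tables) {
            // May be null, like FSTableDescriptors#get(...) in some cases.
            Integer replication = descriptorLookup.apply(t);
            if (replication == null) {
                continue; // missing descriptor: skip instead of throwing NPE
            }
            out.add(replication);
        }
        return out;
    }

    public static void main(String[] args) {
        // "present"/"missing" are made-up table names for the demo.
        System.out.println(replicaCounts(t -> t.equals("present") ? 3 : null,
                                         List.of("present", "missing"))); // prints [3]
    }
}
```

Whether the master should skip the table or abort more gracefully is a design choice for the patch; the point is that the null case is handled explicitly rather than reaching the NPE in {{replicaRegionsNotRecordedInMeta}}.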
[jira] [Commented] (HBASE-16210) Add Timestamp class to the hbase-common and Timestamp type to HTable.
[ https://issues.apache.org/jira/browse/HBASE-16210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15378885#comment-15378885 ] Hiroshi Ikeda commented on HBASE-16210: --- It seems better to use an enum, like TimeUnit. No doubt naming the class "Timestamp" is confusing. Would it be better to use TimestampType or TimestampUnit (like TimeUnit), or something similar? > Add Timestamp class to the hbase-common and Timestamp type to HTable. > - > > Key: HBASE-16210 > URL: https://issues.apache.org/jira/browse/HBASE-16210 > Project: HBase > Issue Type: Sub-task >Reporter: Sai Teja Ranuva >Assignee: Sai Teja Ranuva >Priority: Minor > Labels: patch, testing > Attachments: HBASE-16210.master.1.patch, HBASE-16210.master.2.patch, > HBASE-16210.master.3.patch, HBASE-16210.master.4.patch, > HBASE-16210.master.5.patch, HBASE-16210.master.6.patch, > HBASE-16210.master.7.patch > > > This is a sub-issue of > [HBase-14070|https://issues.apache.org/jira/browse/HBASE-14070]. This JIRA is > a small step towards completely adding Hybrid Logical Clocks (HLC) to HBase. > The main idea of HLC is described in > [HBase-14070|https://issues.apache.org/jira/browse/HBASE-14070], along with > the motivation for adding it to HBase. > What is this patch/issue about? > This issue attempts to add a timestamp class to hbase-common and a timestamp > type to HTable. > This is part of the attempt to get HLC into HBase. This patch does not > interfere with the current workings of HBase. > Why a Timestamp class? > The Timestamp class serves as an abstraction to represent time in HBase in 64 > bits. > It is only used for manipulating the 64 bits of the timestamp and is not > concerned with the actual time. > There are three types of timestamps: System time, Custom and HLC. Each one > has methods to manipulate the 64 bits of the timestamp. > HTable changes: added a timestamp type property to HTable. 
This will help > HBase coexist with the old type of timestamp as well as the HLC that > will be introduced. The default is set to custom timestamp (the current way > timestamps are used), and the default unset timestamp is also a custom timestamp, as it > should be. The default timestamp will be changed to HLC when the HLC feature > is fully introduced in HBase. > Check HBASE-16210.master.6.patch. > Suggestions are welcome. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
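The enum suggestion in the comment above could look roughly like the sketch below. The 20-bit logical-counter layout is purely an assumption for illustration; the actual 64-bit encoding is defined by the HBASE-14070 design, not here:

```java
public class TimestampTypeSketch {
    // An enum in the style of java.util.concurrent.TimeUnit, as suggested
    // in the comment, instead of a class named "Timestamp".
    enum TimestampType {
        SYSTEM, CUSTOM, HLC;

        /** Interprets the 64-bit value according to the type (illustrative only). */
        long physicalTime(long ts) {
            // Assumption: HLC packs a logical counter in the low 20 bits.
            return this == HLC ? ts >>> 20 : ts;
        }
    }

    public static void main(String[] args) {
        long hlcTs = (1234L << 20) | 7; // physical=1234, logical=7 under the assumed layout
        System.out.println(TimestampType.HLC.physicalTime(hlcTs));      // prints 1234
        System.out.println(TimestampType.CUSTOM.physicalTime(5678L));   // prints 5678
    }
}
```

Each enum constant carries the bit-manipulation behavior for its type, which keeps the "three types of timestamps" from the description in one place.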
[jira] [Created] (HBASE-16234) Expect and handle nulls when assigning replicas
Harsh J created HBASE-16234: --- Summary: Expect and handle nulls when assigning replicas Key: HBASE-16234 URL: https://issues.apache.org/jira/browse/HBASE-16234 Project: HBase Issue Type: Bug Components: Region Assignment Affects Versions: 1.2.0 Reporter: Harsh J Observed this on a cluster: {code} FATAL org.apache.hadoop.hbase.master.HMaster: Unhandled exception. Starting shutdown. java.lang.NullPointerException at org.apache.hadoop.hbase.master.AssignmentManager.replicaRegionsNotRecordedInMeta(AssignmentManager.java:2799) at org.apache.hadoop.hbase.master.AssignmentManager.assignAllUserRegions(AssignmentManager.java:2778) at org.apache.hadoop.hbase.master.AssignmentManager.processDeadServersAndRegionsInTransition(AssignmentManager.java:638) at org.apache.hadoop.hbase.master.AssignmentManager.joinCluster(AssignmentManager.java:485) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:723) at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:169) at org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1481) at java.lang.Thread.run(Thread.java:745) {code} It looks like {{FSTableDescriptors#get(…)}} can be expected to return null in some cases, but {{AssignmentManager.replicaRegionsNotRecordedInMeta(…)}} does not currently have any handling for such a possibility. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-14552) Procedure V2 - Reimplement DispatchMergingRegionHandler
[ https://issues.apache.org/jira/browse/HBASE-14552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15378847#comment-15378847 ] Stephen Yuan Jiang commented on HBASE-14552: V2 patch fixed the test failures. > Procedure V2 - Reimplement DispatchMergingRegionHandler > --- > > Key: HBASE-14552 > URL: https://issues.apache.org/jira/browse/HBASE-14552 > Project: HBase > Issue Type: Sub-task > Components: proc-v2 >Affects Versions: 2.0.0 >Reporter: Matteo Bertozzi >Assignee: Stephen Yuan Jiang > Attachments: HBASE-14552.v0-master.patch, > HBASE-14552.v1-master.patch, HBASE-14552.v2-master.patch > > > use the proc-v2 state machine for DispatchMergingRegionHandler. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-14552) Procedure V2 - Reimplement DispatchMergingRegionHandler
[ https://issues.apache.org/jira/browse/HBASE-14552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stephen Yuan Jiang updated HBASE-14552: --- Attachment: HBASE-14552.v2-master.patch > Procedure V2 - Reimplement DispatchMergingRegionHandler > --- > > Key: HBASE-14552 > URL: https://issues.apache.org/jira/browse/HBASE-14552 > Project: HBase > Issue Type: Sub-task > Components: proc-v2 >Affects Versions: 2.0.0 >Reporter: Matteo Bertozzi >Assignee: Stephen Yuan Jiang > Attachments: HBASE-14552.v0-master.patch, > HBASE-14552.v1-master.patch, HBASE-14552.v2-master.patch > > > use the proc-v2 state machine for DispatchMergingRegionHandler. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16233) Procedure V2: Support acquire/release shared table lock concurrently
[ https://issues.apache.org/jira/browse/HBASE-16233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stephen Yuan Jiang updated HBASE-16233: --- Attachment: (was: HBASE-16233.v1-master.patch) > Procedure V2: Support acquire/release shared table lock concurrently > > > Key: HBASE-16233 > URL: https://issues.apache.org/jira/browse/HBASE-16233 > Project: HBase > Issue Type: Sub-task > Components: proc-v2 >Reporter: Stephen Yuan Jiang >Assignee: Stephen Yuan Jiang > Fix For: 2.0.0 > > Attachments: HBASE-16233.v1-master.patch > > > The {{MasterProcedureScheduler.TableQueue}} class has only a single instance of > TableLock ({{private TableLock tableLock = null;}}) to track the exclusive/shared > table lock from TableLockManager. > When multiple shared lock requests come in, a later request overwrites the lock > acquired by an earlier one, and hence we get a weird error when the second or > later release request comes, because we have lost track of the lock. > The issue can be reproduced in the unit test of HBASE-14552. 
[~mbertozzi] > also comes up with a UT without using any real procedure to repro the problem: > {code} > @Test > public void testSchedWithZkLock() throws Exception { > MiniZooKeeperCluster zkCluster = new MiniZooKeeperCluster(conf); > int zkPort = zkCluster.startup(new File("/tmp/test-zk")); > Thread.sleep(1); > conf.set("hbase.zookeeper.quorum", "localhost:" + zkPort); > ZooKeeperWatcher zkw = new ZooKeeperWatcher(conf, "testSchedWithZkLock", > null, false); > queue = new MasterProcedureScheduler(conf, > TableLockManager.createTableLockManager( > conf, zkw, ServerName.valueOf("localhost", 12345, 1))); > final TableName tableName = TableName.valueOf("testtb"); > TestTableProcedure procA = new TestTableProcedure(1, tableName, > TableProcedureInterface.TableOperationType.READ); > TestTableProcedure procB = new TestTableProcedure(2, tableName, > TableProcedureInterface.TableOperationType.READ); > assertTrue(queue.tryAcquireTableSharedLock(procA, tableName)); > assertTrue(queue.tryAcquireTableSharedLock(procB, tableName)); > queue.releaseTableSharedLock(procA, tableName); > queue.releaseTableSharedLock(procB, tableName); > } > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
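The failure mode described above (a single field silently overwritten by the second shared-lock acquisition) can be sketched with a per-procedure lock map, so each release finds its own lock. This is an illustrative direction only, not the committed fix, and the string handle stands in for the real TableLock:

```java
import java.util.HashMap;
import java.util.Map;

public class SharedLockTrackerSketch {
    // procId -> lock handle; replaces a single "private TableLock tableLock" field
    private final Map<Long, String> sharedLocks = new HashMap<>();

    /** Acquire a shared lock for a procedure; each procedure gets its own handle. */
    boolean tryAcquireShared(long procId) {
        if (sharedLocks.containsKey(procId)) return false; // already held by this proc
        sharedLocks.put(procId, "zk-lock-" + procId); // stand-in for the real TableLock
        return true;
    }

    /** Release the lock this procedure acquired; nothing was overwritten or lost. */
    boolean releaseShared(long procId) {
        return sharedLocks.remove(procId) != null;
    }

    public static void main(String[] args) {
        SharedLockTrackerSketch q = new SharedLockTrackerSketch();
        System.out.println(q.tryAcquireShared(1) && q.tryAcquireShared(2)); // true
        System.out.println(q.releaseShared(1) && q.releaseShared(2));       // true
    }
}
```

With the single-field version, the second acquire would clobber the first handle and the second release would fail, which is exactly what the unit test above demonstrates.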
[jira] [Updated] (HBASE-16233) Procedure V2: Support acquire/release shared table lock concurrently
[ https://issues.apache.org/jira/browse/HBASE-16233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stephen Yuan Jiang updated HBASE-16233: --- Resolution: Fixed Status: Resolved (was: Patch Available) > Procedure V2: Support acquire/release shared table lock concurrently > > > Key: HBASE-16233 > URL: https://issues.apache.org/jira/browse/HBASE-16233 > Project: HBase > Issue Type: Sub-task > Components: proc-v2 >Reporter: Stephen Yuan Jiang >Assignee: Stephen Yuan Jiang > Fix For: 2.0.0 > > Attachments: HBASE-16233.v1-master.patch > > > {{MasterProcedureScheduler.TableQueue}} class only has one single instance of > TableLock ({{private TableLock tableLock = null;}}) to track exclusive/shared > table lock from TableLockManager. > When multiple shared lock request comes, the later shared lock request would > overwrite the lock acquired from earlier shared lock request, and hence, we > will get some weird error when the second or later release lock request > comes, because we lose track of the lock. > The issue can be reproduced in the unit test of HBASE-14552. 
[~mbertozzi] > also comes up with a UT without using any real procedure to repro the problem: > {code} > @Test > public void testSchedWithZkLock() throws Exception { > MiniZooKeeperCluster zkCluster = new MiniZooKeeperCluster(conf); > int zkPort = zkCluster.startup(new File("/tmp/test-zk")); > Thread.sleep(1); > conf.set("hbase.zookeeper.quorum", "localhost:" + zkPort); > ZooKeeperWatcher zkw = new ZooKeeperWatcher(conf, "testSchedWithZkLock", > null, false); > queue = new MasterProcedureScheduler(conf, > TableLockManager.createTableLockManager( > conf, zkw, ServerName.valueOf("localhost", 12345, 1))); > final TableName tableName = TableName.valueOf("testtb"); > TestTableProcedure procA = new TestTableProcedure(1, tableName, > TableProcedureInterface.TableOperationType.READ); > TestTableProcedure procB = new TestTableProcedure(2, tableName, > TableProcedureInterface.TableOperationType.READ); > assertTrue(queue.tryAcquireTableSharedLock(procA, tableName)); > assertTrue(queue.tryAcquireTableSharedLock(procB, tableName)); > queue.releaseTableSharedLock(procA, tableName); > queue.releaseTableSharedLock(procB, tableName); > } > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16233) Procedure V2: Support acquire/release shared table lock concurrently
[ https://issues.apache.org/jira/browse/HBASE-16233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stephen Yuan Jiang updated HBASE-16233: --- Attachment: HBASE-16233.v1-master.patch > Procedure V2: Support acquire/release shared table lock concurrently > > > Key: HBASE-16233 > URL: https://issues.apache.org/jira/browse/HBASE-16233 > Project: HBase > Issue Type: Sub-task > Components: proc-v2 >Reporter: Stephen Yuan Jiang >Assignee: Stephen Yuan Jiang > Fix For: 2.0.0 > > Attachments: HBASE-16233.v1-master.patch > > > {{MasterProcedureScheduler.TableQueue}} class only has one single instance of > TableLock ({{private TableLock tableLock = null;}}) to track exclusive/shared > table lock from TableLockManager. > When multiple shared lock request comes, the later shared lock request would > overwrite the lock acquired from earlier shared lock request, and hence, we > will get some weird error when the second or later release lock request > comes, because we lose track of the lock. > The issue can be reproduced in the unit test of HBASE-14552. 
[~mbertozzi] > also comes up with a UT without using any real procedure to repro the problem: > {code} > @Test > public void testSchedWithZkLock() throws Exception { > MiniZooKeeperCluster zkCluster = new MiniZooKeeperCluster(conf); > int zkPort = zkCluster.startup(new File("/tmp/test-zk")); > Thread.sleep(1); > conf.set("hbase.zookeeper.quorum", "localhost:" + zkPort); > ZooKeeperWatcher zkw = new ZooKeeperWatcher(conf, "testSchedWithZkLock", > null, false); > queue = new MasterProcedureScheduler(conf, > TableLockManager.createTableLockManager( > conf, zkw, ServerName.valueOf("localhost", 12345, 1))); > final TableName tableName = TableName.valueOf("testtb"); > TestTableProcedure procA = new TestTableProcedure(1, tableName, > TableProcedureInterface.TableOperationType.READ); > TestTableProcedure procB = new TestTableProcedure(2, tableName, > TableProcedureInterface.TableOperationType.READ); > assertTrue(queue.tryAcquireTableSharedLock(procA, tableName)); > assertTrue(queue.tryAcquireTableSharedLock(procB, tableName)); > queue.releaseTableSharedLock(procA, tableName); > queue.releaseTableSharedLock(procB, tableName); > } > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16233) Procedure V2: Support acquire/release shared table lock concurrently
[ https://issues.apache.org/jira/browse/HBASE-16233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15378834#comment-15378834 ] Stephen Yuan Jiang commented on HBASE-16233: The {{TEST-org.apache.hadoop.hbase.TestAcidGuarantees.xml.}} test failure has nothing to do with this patch. > Procedure V2: Support acquire/release shared table lock concurrently > > > Key: HBASE-16233 > URL: https://issues.apache.org/jira/browse/HBASE-16233 > Project: HBase > Issue Type: Sub-task > Components: proc-v2 >Reporter: Stephen Yuan Jiang >Assignee: Stephen Yuan Jiang > Fix For: 2.0.0 > > Attachments: HBASE-16233.v1-master.patch > > > The {{MasterProcedureScheduler.TableQueue}} class has only a single instance of > TableLock ({{private TableLock tableLock = null;}}) to track the exclusive/shared > table lock from TableLockManager. > When multiple shared lock requests come in, a later request overwrites the lock > acquired by an earlier one, and hence we get a weird error when the second or > later release request comes, because we have lost track of the lock. > The issue can be reproduced in the unit test of HBASE-14552. 
[~mbertozzi] > also comes up with a UT without using any real procedure to repro the problem: > {code} > @Test > public void testSchedWithZkLock() throws Exception { > MiniZooKeeperCluster zkCluster = new MiniZooKeeperCluster(conf); > int zkPort = zkCluster.startup(new File("/tmp/test-zk")); > Thread.sleep(1); > conf.set("hbase.zookeeper.quorum", "localhost:" + zkPort); > ZooKeeperWatcher zkw = new ZooKeeperWatcher(conf, "testSchedWithZkLock", > null, false); > queue = new MasterProcedureScheduler(conf, > TableLockManager.createTableLockManager( > conf, zkw, ServerName.valueOf("localhost", 12345, 1))); > final TableName tableName = TableName.valueOf("testtb"); > TestTableProcedure procA = new TestTableProcedure(1, tableName, > TableProcedureInterface.TableOperationType.READ); > TestTableProcedure procB = new TestTableProcedure(2, tableName, > TableProcedureInterface.TableOperationType.READ); > assertTrue(queue.tryAcquireTableSharedLock(procA, tableName)); > assertTrue(queue.tryAcquireTableSharedLock(procB, tableName)); > queue.releaseTableSharedLock(procA, tableName); > queue.releaseTableSharedLock(procB, tableName); > } > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-14070) Hybrid Logical Clocks for HBase
[ https://issues.apache.org/jira/browse/HBASE-14070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15378828#comment-15378828 ] Sai Teja Ranuva commented on HBASE-14070: - [~enis] Again, referring to the description in the RB link: "Undo updates to meta with local clock". My understanding of why timestamps were assigned by the local clock in the first place was to guard against possible network reordering of delete region / add region, which might lead to the add-region update to meta getting eclipsed. If my understanding is right, why are we undoing the local-clock timestamp updates to the meta table, and does HLC help with this in any way? > Hybrid Logical Clocks for HBase > --- > > Key: HBASE-14070 > URL: https://issues.apache.org/jira/browse/HBASE-14070 > Project: HBase > Issue Type: New Feature >Reporter: Enis Soztutar >Assignee: Sai Teja Ranuva > Attachments: HybridLogicalClocksforHBaseandPhoenix.docx, > HybridLogicalClocksforHBaseandPhoenix.pdf > > > HBase and Phoenix use the system's physical clock (PT) to give timestamps to > events (reads and writes). This works mostly when the system clock is strictly > monotonically increasing and there is no cross-dependency between servers' > clocks. However, we know that leap seconds, general clock skew and clock drift > are in fact real. > This jira proposes using Hybrid Logical Clocks (HLC), an implementation of a > hybrid physical clock + logical clock. HLC is the best of both worlds: it > keeps causality relationships similar to logical clocks, but is still > compatible with an NTP-based physical system clock. HLC can be represented in > 64 bits. > A design document is attached and also can be found here: > https://docs.google.com/document/d/1LL2GAodiYi0waBz5ODGL4LDT4e_bXy8P9h6kWC05Bhw/edit# -- This message was sent by Atlassian JIRA (v6.3.4#6332)
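The hybrid physical + logical clock described above can be sketched with the standard HLC update rule for a local event. The 20-bit logical-counter packing below is an assumption for illustration; the attached design doc defines the real layout:

```java
public class HlcSketch {
    private long physical; // last observed physical component (ms)
    private long logical;  // logical counter, reset whenever physical advances

    /** Advance the clock for a local/send event, given the current wall clock. */
    synchronized long now(long wallMillis) {
        if (wallMillis > physical) {
            physical = wallMillis;
            logical = 0;           // physical clock moved forward: reset logical part
        } else {
            logical++;             // wall clock did not advance: bump logical part
        }
        // Pack both components into a single 64-bit timestamp (assumed split).
        return (physical << 20) | (logical & 0xFFFFF);
    }

    public static void main(String[] args) {
        HlcSketch clock = new HlcSketch();
        long t1 = clock.now(100);
        long t2 = clock.now(100); // same wall time: logical counter increments
        System.out.println(t2 > t1); // prints true
    }
}
```

This shows why HLC stays monotonic even when the physical clock stalls or skews: causality is carried by the logical counter, while the high bits remain close to NTP time.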
[jira] [Commented] (HBASE-16233) Procedure V2: Support acquire/release shared table lock concurrently
[ https://issues.apache.org/jira/browse/HBASE-16233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15378826#comment-15378826 ] Hadoop QA commented on HBASE-16233: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 59s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s {color} | {color:green} master passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s {color} | {color:green} master passed with JDK v1.7.0_80 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 53s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 16s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 57s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s {color} | {color:green} master passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s {color} | {color:green} master passed with JDK v1.7.0_80 {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 45s {color} | {color:green} the patch passed 
{color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s {color} | {color:green} the patch passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 38s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s {color} | {color:green} the patch passed with JDK v1.7.0_80 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 34s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 53s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 16s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 25m 54s {color} | {color:green} Patch does not cause any errors with Hadoop 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 10s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s {color} | {color:green} the patch passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s {color} | {color:green} the patch passed with JDK v1.7.0_80 {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 92m 0s {color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 17s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 133m 7s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12818079/HBASE-16233.v1-master.patch | | JIRA Issue | HBASE-16233 | | Optional Tests | asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux asf907.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / 243fe28 | | Default Java | 1.7.0_80 | | Multi-JDK versions | /home/jenkins/tools/java/jdk1.8.0:1.8.0 /home/jenkins/jenkins-slave/tools/hudson.model.JDK/JDK_1.7_latest_:1.7.0_80 | | findbugs | v3.0.0 | | unit |
[jira] [Commented] (HBASE-15305) Fix a couple of incorrect anchors in HBase ref guide
[ https://issues.apache.org/jira/browse/HBASE-15305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15378803#comment-15378803 ] Hudson commented on HBASE-15305: FAILURE: Integrated in HBase-Trunk_matrix #1230 (See [https://builds.apache.org/job/HBase-Trunk_matrix/1230/]) HBASE-15305 Fix a couple of incorrect anchors in HBase Ref Guide (mstanleyjones: rev 6462a615cb4405764d67599a4f269653d044c754) * src/main/asciidoc/_chapters/configuration.adoc * src/main/asciidoc/_chapters/architecture.adoc > Fix a couple of incorrect anchors in HBase ref guide > > > Key: HBASE-15305 > URL: https://issues.apache.org/jira/browse/HBASE-15305 > Project: HBase > Issue Type: Sub-task > Components: documentation >Reporter: Misty Stanley-Jones >Assignee: Misty Stanley-Jones > Fix For: 2.0.0 > > Attachments: HBASE-15305-v2.patch > > > From HBASE-15298: > {quote} > After this patch is applied, there are still two missing asciidoc anchors, > distributed.log.splitting and fail.fast.expired.active.master. These are > related to features removed by HBASE-14053 and HBASE-10569. I think these > anchors(and related texts) should be handled by someone who knows those > issues well, so I retain them. > {quote} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16183) Correct errors in example programs of coprocessor in Ref Guide
[ https://issues.apache.org/jira/browse/HBASE-16183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15378804#comment-15378804 ] Hudson commented on HBASE-16183: FAILURE: Integrated in HBase-Trunk_matrix #1230 (See [https://builds.apache.org/job/HBase-Trunk_matrix/1230/]) HBASE-16183: Correct errors in example programs of coprocessor in Ref (mstanleyjones: rev 12813c7f030168584b0a0de74f7cb6d6aa8e36d2) * src/main/asciidoc/_chapters/cp.adoc > Correct errors in example programs of coprocessor in Ref Guide > -- > > Key: HBASE-16183 > URL: https://issues.apache.org/jira/browse/HBASE-16183 > Project: HBase > Issue Type: Bug > Components: documentation >Affects Versions: 1.2.0 >Reporter: Xiang Li >Assignee: Xiang Li >Priority: Minor > Labels: documentaion > Fix For: 2.0.0 > > Attachments: HBASE-16183-master-v1.patch, > HBASE-16183.master.002.patch, HBASE-16183.patch > > > There are some errors in the example programs for coprocessor in Ref Guide. > Such as using deprecated APIs, generic... -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16213) A new HFileBlock structure for fast random get
[ https://issues.apache.org/jira/browse/HBASE-16213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15378783#comment-15378783 ] binlijin commented on HBASE-16213: -- ROW_INDEX_V2 will store the column family only once in an HFileBlock. > A new HFileBlock structure for fast random get > -- > > Key: HBASE-16213 > URL: https://issues.apache.org/jira/browse/HBASE-16213 > Project: HBase > Issue Type: New Feature >Reporter: binlijin > Attachments: HBASE-16213-master_v1.patch, HBASE-16213.patch, > HBASE-16213_v2.patch > > > HFileBlock stores cells sequentially; currently, to get a row from the block, > it scans from the first cell until the row's cell. > The new structure stores every row's start offset along with the data, so it > can find the exact row with a binary search. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16213) A new HFileBlock structure for fast random get
[ https://issues.apache.org/jira/browse/HBASE-16213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15378782#comment-15378782 ] binlijin commented on HBASE-16213: -- [~tedyu] org.apache.hadoop.hbase.io.encoding.TestEncodedSeekers org.apache.hadoop.hbase.io.encoding.TestChangingEncoding org.apache.hadoop.hbase.io.encoding.TestDataBlockEncoders org.apache.hadoop.hbase.io.encoding.TestSeekToBlockWithEncoders These four unit tests already cover ROW_INDEX_V1. For larger key/values (key=10B, value=1k), there is a 10% improvement. > A new HFileBlock structure for fast random get > -- > > Key: HBASE-16213 > URL: https://issues.apache.org/jira/browse/HBASE-16213 > Project: HBase > Issue Type: New Feature >Reporter: binlijin > Attachments: HBASE-16213-master_v1.patch, HBASE-16213.patch, > HBASE-16213_v2.patch > > > HFileBlock stores cells sequentially; currently, to get a row from the block, > it scans from the first cell until the row's cell. > The new structure stores every row's start offset along with the data, so it > can find the exact row with a binary search. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16213) A new HFileBlock structure for fast random get
[ https://issues.apache.org/jira/browse/HBASE-16213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15378779#comment-15378779 ] binlijin commented on HBASE-16213: -- ROW_INDEX_V2 stores the column family only once, so the overhead is not as large, but master does not yet have this version. A third version will store every row only once; that version is not implemented yet. > A new HFileBlock structure for fast random get > -- > > Key: HBASE-16213 > URL: https://issues.apache.org/jira/browse/HBASE-16213 > Project: HBase > Issue Type: New Feature >Reporter: binlijin > Attachments: HBASE-16213-master_v1.patch, HBASE-16213.patch, > HBASE-16213_v2.patch > > > HFileBlock stores cells sequentially; currently, to get a row from the block, > it scans from the first cell until the row's cell. > The new structure stores every row's start offset along with the data, so it > can find the exact row with a binary search. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
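The row-offset index described in this issue can be sketched as a binary search over per-row start offsets instead of a linear scan of cells. The class and field names below are illustrative, not the actual ROW_INDEX_V1 code:

```java
import java.util.Arrays;

public class RowIndexBlockSketch {
    private final String[] rowKeys;   // row keys in the block, stored sorted
    private final int[] rowOffsets;   // start offset of each row's data in the block

    RowIndexBlockSketch(String[] rowKeys, int[] rowOffsets) {
        this.rowKeys = rowKeys;
        this.rowOffsets = rowOffsets;
    }

    /** Returns the data offset of the row, or -1 if absent: O(log n), not O(n). */
    int seekRow(String row) {
        int idx = Arrays.binarySearch(rowKeys, row); // possible because keys are sorted
        return idx >= 0 ? rowOffsets[idx] : -1;
    }

    public static void main(String[] args) {
        RowIndexBlockSketch block = new RowIndexBlockSketch(
            new String[]{"row1", "row5", "row9"}, new int[]{0, 120, 260});
        System.out.println(block.seekRow("row5")); // prints 120
    }
}
```

The trade-off the comments discuss is index size: V1 pays extra space per row, V2 amortizes the column family, and the sketched-but-unimplemented third version would store each row key only once.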
[jira] [Commented] (HBASE-16195) Should not add chunk into chunkQueue if not using chunk pool in HeapMemStoreLAB
[ https://issues.apache.org/jira/browse/HBASE-16195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15378768#comment-15378768 ] Yu Li commented on HBASE-16195: --- Checked the failed case {{TestSnapshotCloneIndependence}}; it is unrelated to this commit, and I also confirmed the test passes in a local run. > Should not add chunk into chunkQueue if not using chunk pool in > HeapMemStoreLAB > --- > > Key: HBASE-16195 > URL: https://issues.apache.org/jira/browse/HBASE-16195 > Project: HBase > Issue Type: Sub-task >Affects Versions: 1.1.5, 1.2.2, 0.98.20 >Reporter: Yu Li >Assignee: Yu Li > Fix For: 2.0.0, 1.3.0, 1.4.0, 1.1.6, 0.98.21, 1.2.3 > > Attachments: HBASE-16195.patch, HBASE-16195_v2.patch, > HBASE-16195_v3.patch, HBASE-16195_v4.patch, HBASE-16195_v4.patch > > > For the problem description and analysis, please refer to HBASE-16193 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
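The fix named in the issue title can be sketched as follows: only track allocated chunks when a chunk pool is actually in use, so non-pooled chunks remain collectible. The class below is a hypothetical stand-in for HeapMemStoreLAB, not the real patch:

```java
import java.util.ArrayList;
import java.util.List;

public class ChunkTrackingSketch {
    private final boolean usePool;
    private final List<byte[]> chunkQueue = new ArrayList<>(); // chunks reclaimed into the pool

    ChunkTrackingSketch(boolean usePool) {
        this.usePool = usePool;
    }

    byte[] allocateChunk(int size) {
        byte[] chunk = new byte[size];
        if (usePool) {
            chunkQueue.add(chunk); // only pooled chunks need tracking for later reuse
        }
        // Without a pool the chunk is left to the GC; queueing it would pin it in memory.
        return chunk;
    }

    int trackedChunks() {
        return chunkQueue.size();
    }

    public static void main(String[] args) {
        ChunkTrackingSketch noPool = new ChunkTrackingSketch(false);
        noPool.allocateChunk(2048);
        System.out.println(noPool.trackedChunks()); // prints 0
    }
}
```

Queuing chunks unconditionally (the pre-fix behavior) keeps a strong reference to every chunk even when no pool will ever reuse them, which is the memory issue analyzed in HBASE-16193.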
[jira] [Commented] (HBASE-14479) Apply the Leader/Followers pattern to RpcServer's Reader
[ https://issues.apache.org/jira/browse/HBASE-14479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15378767#comment-15378767 ] Hiroshi Ikeda commented on HBASE-14479: --- To reduce the overhead of unnecessarily changing registrations, we should postpone turning the read flag back on and handing exclusive access to the socket back to the leader until we find that we cannot construct a task even after retrieving data from the socket. Additionally, we should retrieve data into an off-heap buffer whose size is equal to or larger than the socket's native buffer, to reduce the overhead of native calls. Of course, retrieving all available tasks from a socket at once risks memory shortage and unfair execution across connections. To prevent the unfairness, we should queue at most one task per connection. That doesn't mean one connection cannot execute multiple tasks simultaneously; the restriction applies only to queued tasks awaiting execution. From a different viewpoint, just before executing a task, we should delegate another follower to execute or queue the next task, or to take over as leader, as described above. AdaptiveLifoCoDelCallQueue is not appropriate when clients can send multiple requests simultaneously. Because it is not realistic to retrieve all requests at once, under congestion requests will be executed in whatever order they become available. Moreover, such requests may unfairly execute before others simply because they were retrieved later. 
> Apply the Leader/Followers pattern to RpcServer's Reader > > > Key: HBASE-14479 > URL: https://issues.apache.org/jira/browse/HBASE-14479 > Project: HBase > Issue Type: Improvement > Components: IPC/RPC, Performance >Reporter: Hiroshi Ikeda >Assignee: Hiroshi Ikeda >Priority: Minor > Attachments: HBASE-14479-V2 (1).patch, HBASE-14479-V2.patch, > HBASE-14479-V2.patch, HBASE-14479.patch, flamegraph-19152.svg, > flamegraph-32667.svg, gc.png, gets.png, io.png, median.png > > > {{RpcServer}} uses multiple selectors to read data for load distribution, but > the distribution is just done by round-robin. It is uncertain, especially for > long run, whether load is equally divided and resources are used without > being wasted. > Moreover, multiple selectors may cause excessive context switches which give > priority to low latency (while we just add the requests to queues), and it is > possible to reduce throughput of the whole server. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
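Hiroshi's scheme above — a single leader thread waits on the event source and promotes a follower before processing, so the source is never left unattended — can be sketched with plain java.util.concurrent primitives. This is an illustrative sketch, not the proposed patch; the blocking queue stands in for the selector.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.Semaphore;

// Leader/Followers sketch: only the current leader waits on the event source
// (a queue standing in for the selector). Before processing an event, the
// leader hands leadership to a follower, so events keep being picked up
// while earlier ones are still being processed.
public class LeaderFollowers {
    private final BlockingQueue<Runnable> events = new LinkedBlockingQueue<>();
    private final Semaphore leaderToken = new Semaphore(1); // at most one leader

    public void submit(Runnable event) {
        events.add(event);
    }

    // Every worker thread runs this loop.
    public void workerLoop() throws InterruptedException {
        while (true) {
            leaderToken.acquire();          // become the leader
            Runnable event = events.take(); // wait on the "selector"
            leaderToken.release();          // promote a follower first...
            event.run();                    // ...then process the event concurrently
        }
    }
}
```

In the real RpcServer the leader would also have to re-register read interest and enforce the one-queued-task-per-connection limit Hiroshi describes, which this sketch omits.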
[jira] [Commented] (HBASE-16233) Procedure V2: Support acquire/release shared table lock concurrently
[ https://issues.apache.org/jira/browse/HBASE-16233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15378751#comment-15378751 ] Matteo Bertozzi commented on HBASE-16233: - on a second look, there is one little thing to add to the test to make sure we clean up the mini zk cluster. {code} int zkPort = zkCluster.startup(new File(dir)); try { ... } finally { zkCluster.shutdown(); } {code} you can fix it on commit, for me > Procedure V2: Support acquire/release shared table lock concurrently > > > Key: HBASE-16233 > URL: https://issues.apache.org/jira/browse/HBASE-16233 > Project: HBase > Issue Type: Sub-task > Components: proc-v2 >Reporter: Stephen Yuan Jiang >Assignee: Stephen Yuan Jiang > Fix For: 2.0.0 > > Attachments: HBASE-16233.v1-master.patch > > > {{MasterProcedureScheduler.TableQueue}} class only has one single instance of > TableLock ({{private TableLock tableLock = null;}}) to track exclusive/shared > table lock from TableLockManager. > When multiple shared lock request comes, the later shared lock request would > overwrite the lock acquired from earlier shared lock request, and hence, we > will get some weird error when the second or later release lock request > comes, because we lose track of the lock. > The issue can be reproduced in the unit test of HBASE-14552. 
[~mbertozzi] > also comes up with a UT without using any real procedure to repro the problem: > {code} > @Test > public void testSchedWithZkLock() throws Exception { > MiniZooKeeperCluster zkCluster = new MiniZooKeeperCluster(conf); > int zkPort = zkCluster.startup(new File("/tmp/test-zk")); > Thread.sleep(1); > conf.set("hbase.zookeeper.quorum", "localhost:" + zkPort); > ZooKeeperWatcher zkw = new ZooKeeperWatcher(conf, "testSchedWithZkLock", > null, false); > queue = new MasterProcedureScheduler(conf, > TableLockManager.createTableLockManager( > conf, zkw, ServerName.valueOf("localhost", 12345, 1))); > final TableName tableName = TableName.valueOf("testtb"); > TestTableProcedure procA = new TestTableProcedure(1, tableName, > TableProcedureInterface.TableOperationType.READ); > TestTableProcedure procB = new TestTableProcedure(2, tableName, > TableProcedureInterface.TableOperationType.READ); > assertTrue(queue.tryAcquireTableSharedLock(procA, tableName)); > assertTrue(queue.tryAcquireTableSharedLock(procB, tableName)); > queue.releaseTableSharedLock(procA, tableName); > queue.releaseTableSharedLock(procB, tableName); > } > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
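The try/finally Matteo asks for matters because a failed assertion would otherwise skip the shutdown and leak the mini ZK cluster into later tests. A minimal stand-alone illustration of the pattern — StubZkCluster is a hypothetical stand-in for MiniZooKeeperCluster, used only so the sketch runs without the HBase test harness:

```java
// StubZkCluster is a hypothetical stand-in for MiniZooKeeperCluster,
// so this sketch runs without the HBase test classes.
class StubZkCluster {
    boolean running;
    int startup() { running = true; return 2181; }
    void shutdown() { running = false; }
}

public class CleanupPattern {
    // Mirrors Matteo's suggestion: even when the test body throws,
    // the finally block guarantees the cluster is shut down.
    public static void runTestBody(StubZkCluster zkCluster, Runnable body) {
        zkCluster.startup();
        try {
            body.run();
        } finally {
            zkCluster.shutdown();
        }
    }
}
```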
[jira] [Updated] (HBASE-16233) Procedure V2: Support acquire/release shared table lock concurrently
[ https://issues.apache.org/jira/browse/HBASE-16233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matteo Bertozzi updated HBASE-16233: Status: Patch Available (was: Open) > Procedure V2: Support acquire/release shared table lock concurrently > > > Key: HBASE-16233 > URL: https://issues.apache.org/jira/browse/HBASE-16233 > Project: HBase > Issue Type: Sub-task > Components: proc-v2 >Reporter: Stephen Yuan Jiang >Assignee: Stephen Yuan Jiang > Fix For: 2.0.0 > > Attachments: HBASE-16233.v1-master.patch > > > {{MasterProcedureScheduler.TableQueue}} class only has one single instance of > TableLock ({{private TableLock tableLock = null;}}) to track exclusive/shared > table lock from TableLockManager. > When multiple shared lock request comes, the later shared lock request would > overwrite the lock acquired from earlier shared lock request, and hence, we > will get some weird error when the second or later release lock request > comes, because we lose track of the lock. > The issue can be reproduced in the unit test of HBASE-14552. 
[~mbertozzi] > also comes up with a UT without using any real procedure to repro the problem: > {code} > @Test > public void testSchedWithZkLock() throws Exception { > MiniZooKeeperCluster zkCluster = new MiniZooKeeperCluster(conf); > int zkPort = zkCluster.startup(new File("/tmp/test-zk")); > Thread.sleep(1); > conf.set("hbase.zookeeper.quorum", "localhost:" + zkPort); > ZooKeeperWatcher zkw = new ZooKeeperWatcher(conf, "testSchedWithZkLock", > null, false); > queue = new MasterProcedureScheduler(conf, > TableLockManager.createTableLockManager( > conf, zkw, ServerName.valueOf("localhost", 12345, 1))); > final TableName tableName = TableName.valueOf("testtb"); > TestTableProcedure procA = new TestTableProcedure(1, tableName, > TableProcedureInterface.TableOperationType.READ); > TestTableProcedure procB = new TestTableProcedure(2, tableName, > TableProcedureInterface.TableOperationType.READ); > assertTrue(queue.tryAcquireTableSharedLock(procA, tableName)); > assertTrue(queue.tryAcquireTableSharedLock(procB, tableName)); > queue.releaseTableSharedLock(procA, tableName); > queue.releaseTableSharedLock(procB, tableName); > } > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16219) Move meta bootstrap out of HMaster
[ https://issues.apache.org/jira/browse/HBASE-16219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15378750#comment-15378750 ] Matteo Bertozzi commented on HBASE-16219: - thanks! > Move meta bootstrap out of HMaster > -- > > Key: HBASE-16219 > URL: https://issues.apache.org/jira/browse/HBASE-16219 > Project: HBase > Issue Type: Sub-task > Components: master, Region Assignment >Affects Versions: 2.0.0 >Reporter: Matteo Bertozzi >Assignee: Matteo Bertozzi >Priority: Trivial > Fix For: 2.0.0 > > Attachments: HBASE-16219-addendum.patch, HBASE-16219-v0.patch > > > another cleanup to have a smaller integration patch for the new AM. > Trying to isolate the Assignment code from the HMaster. > Move all the bootstrap code to split meta logs and assign meta regions from > HMaster to a MasterMetaBootstrap class to also reduce the long > finishActiveMasterInitialization() method -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16233) Procedure V2: Support acquire/release shared table lock concurrently
[ https://issues.apache.org/jira/browse/HBASE-16233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15378749#comment-15378749 ] Matteo Bertozzi commented on HBASE-16233: - +1 > Procedure V2: Support acquire/release shared table lock concurrently > > > Key: HBASE-16233 > URL: https://issues.apache.org/jira/browse/HBASE-16233 > Project: HBase > Issue Type: Sub-task > Components: proc-v2 >Reporter: Stephen Yuan Jiang >Assignee: Stephen Yuan Jiang > Fix For: 2.0.0 > > Attachments: HBASE-16233.v1-master.patch > > > {{MasterProcedureScheduler.TableQueue}} class only has one single instance of > TableLock ({{private TableLock tableLock = null;}}) to track exclusive/shared > table lock from TableLockManager. > When multiple shared lock request comes, the later shared lock request would > overwrite the lock acquired from earlier shared lock request, and hence, we > will get some weird error when the second or later release lock request > comes, because we lose track of the lock. > The issue can be reproduced in the unit test of HBASE-14552. 
[~mbertozzi] > also comes up with a UT without using any real procedure to repro the problem: > {code} > @Test > public void testSchedWithZkLock() throws Exception { > MiniZooKeeperCluster zkCluster = new MiniZooKeeperCluster(conf); > int zkPort = zkCluster.startup(new File("/tmp/test-zk")); > Thread.sleep(1); > conf.set("hbase.zookeeper.quorum", "localhost:" + zkPort); > ZooKeeperWatcher zkw = new ZooKeeperWatcher(conf, "testSchedWithZkLock", > null, false); > queue = new MasterProcedureScheduler(conf, > TableLockManager.createTableLockManager( > conf, zkw, ServerName.valueOf("localhost", 12345, 1))); > final TableName tableName = TableName.valueOf("testtb"); > TestTableProcedure procA = new TestTableProcedure(1, tableName, > TableProcedureInterface.TableOperationType.READ); > TestTableProcedure procB = new TestTableProcedure(2, tableName, > TableProcedureInterface.TableOperationType.READ); > assertTrue(queue.tryAcquireTableSharedLock(procA, tableName)); > assertTrue(queue.tryAcquireTableSharedLock(procB, tableName)); > queue.releaseTableSharedLock(procA, tableName); > queue.releaseTableSharedLock(procB, tableName); > } > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HBASE-16219) Move meta bootstrap out of HMaster
[ https://issues.apache.org/jira/browse/HBASE-16219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang resolved HBASE-16219. --- Resolution: Fixed Pushed the addendum to master. > Move meta bootstrap out of HMaster > -- > > Key: HBASE-16219 > URL: https://issues.apache.org/jira/browse/HBASE-16219 > Project: HBase > Issue Type: Sub-task > Components: master, Region Assignment >Affects Versions: 2.0.0 >Reporter: Matteo Bertozzi >Assignee: Matteo Bertozzi >Priority: Trivial > Fix For: 2.0.0 > > Attachments: HBASE-16219-addendum.patch, HBASE-16219-v0.patch > > > another cleanup to have a smaller integration patch for the new AM. > Trying to isolate the Assignment code from the HMaster. > Move all the bootstrap code to split meta logs and assign meta regions from > HMaster to a MasterMetaBootstrap class to also reduce the long > finishActiveMasterInitialization() method -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16219) Move meta bootstrap out of HMaster
[ https://issues.apache.org/jira/browse/HBASE-16219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-16219: -- Attachment: HBASE-16219-addendum.patch > Move meta bootstrap out of HMaster > -- > > Key: HBASE-16219 > URL: https://issues.apache.org/jira/browse/HBASE-16219 > Project: HBase > Issue Type: Sub-task > Components: master, Region Assignment >Affects Versions: 2.0.0 >Reporter: Matteo Bertozzi >Assignee: Matteo Bertozzi >Priority: Trivial > Fix For: 2.0.0 > > Attachments: HBASE-16219-addendum.patch, HBASE-16219-v0.patch > > > another cleanup to have a smaller integration patch for the new AM. > Trying to isolate the Assignment code from the HMaster. > Move all the bootstrap code to split meta logs and assign meta regions from > HMaster to a MasterMetaBootstrap class to also reduce the long > finishActiveMasterInitialization() method -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16233) Procedure V2: Support acquire/release shared table lock concurrently
[ https://issues.apache.org/jira/browse/HBASE-16233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15378745#comment-15378745 ] Stephen Yuan Jiang commented on HBASE-16233: Several approaches were discussed with [~mbertozzi]: - Solution 1: as long as isSingleSharedLock() is false, we don't call zk to acquire the shared lock, because one shared lock in zk is good enough; the same goes for releasing the lock: if isSingleSharedLock() is false, there is no need to call zk to release it. Solution 1 looks like a hacky solution, but it should work and the change is simple (we do need to move the expensive zk lock acquire/release inside the synchronization block. - Note: we already use isSingleSharedLock() to decide the 'reset' parameter of the lock release - I think there is a bug in acquiring/releasing the zk lock outside the synchronization block, because isSingleSharedLock() could change.) - Solution 2: replace {{private TableLock tableLock}} with a private HashMap so that we track all the locks. The drawback is a little extra overhead from the hash table lookup when the exclusive lock is used. Solution 2 is more robust, as we create multiple shared lock znodes to track each procedure, but it is a little more complicated, and there is really no need to over-complicate a part of the code that might not exist long term. According to [~mbertozzi], [~Abby] is looking into removing the zk lock in Apache HBase 2.0 and will open a JIRA for that work. In the meantime, the V1 patch uses Solution 1. 
> Procedure V2: Support acquire/release shared table lock concurrently > > > Key: HBASE-16233 > URL: https://issues.apache.org/jira/browse/HBASE-16233 > Project: HBase > Issue Type: Sub-task > Components: proc-v2 >Reporter: Stephen Yuan Jiang >Assignee: Stephen Yuan Jiang > Fix For: 2.0.0 > > Attachments: HBASE-16233.v1-master.patch > > > {{MasterProcedureScheduler.TableQueue}} class only has one single instance of > TableLock ({{private TableLock tableLock = null;}}) to track exclusive/shared > table lock from TableLockManager. > When multiple shared lock request comes, the later shared lock request would > overwrite the lock acquired from earlier shared lock request, and hence, we > will get some weird error when the second or later release lock request > comes, because we lose track of the lock. > The issue can be reproduced in the unit test of HBASE-14552. [~mbertozzi] > also comes up with a UT without using any real procedure to repro the problem: > {code} > @Test > public void testSchedWithZkLock() throws Exception { > MiniZooKeeperCluster zkCluster = new MiniZooKeeperCluster(conf); > int zkPort = zkCluster.startup(new File("/tmp/test-zk")); > Thread.sleep(1); > conf.set("hbase.zookeeper.quorum", "localhost:" + zkPort); > ZooKeeperWatcher zkw = new ZooKeeperWatcher(conf, "testSchedWithZkLock", > null, false); > queue = new MasterProcedureScheduler(conf, > TableLockManager.createTableLockManager( > conf, zkw, ServerName.valueOf("localhost", 12345, 1))); > final TableName tableName = TableName.valueOf("testtb"); > TestTableProcedure procA = new TestTableProcedure(1, tableName, > TableProcedureInterface.TableOperationType.READ); > TestTableProcedure procB = new TestTableProcedure(2, tableName, > TableProcedureInterface.TableOperationType.READ); > assertTrue(queue.tryAcquireTableSharedLock(procA, tableName)); > assertTrue(queue.tryAcquireTableSharedLock(procB, tableName)); > queue.releaseTableSharedLock(procA, tableName); > queue.releaseTableSharedLock(procB, 
tableName); > } > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
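Solution 1 above amounts to reference-counting the sharers and touching ZooKeeper only on the first acquire and the last release, all inside the scheduler's synchronization so the count and the zk call cannot race. A hedged sketch of that idea — the Runnable hooks stand in for the TableLockManager calls and are illustrative, not the real HBase API:

```java
// Hedged sketch of Solution 1: reference-count the sharers and touch the
// expensive ZooKeeper-backed lock only on the first acquire and the last
// release. The Runnable hooks stand in for the TableLockManager calls.
public class SharedLockCounter {
    private int sharedCount = 0;
    private final Runnable zkAcquire;
    private final Runnable zkRelease;

    public SharedLockCounter(Runnable zkAcquire, Runnable zkRelease) {
        this.zkAcquire = zkAcquire;
        this.zkRelease = zkRelease;
    }

    // synchronized so the count and the zk call cannot race -- the bug
    // Stephen notes when acquire/release happens outside the lock.
    public synchronized void acquireShared() {
        if (sharedCount == 0) {
            zkAcquire.run(); // only the first sharer takes the single zk lock
        }
        sharedCount++;
    }

    public synchronized void releaseShared() {
        sharedCount--;
        if (sharedCount == 0) {
            zkRelease.run(); // only the last sharer drops it
        }
    }
}
```

With this shape, the second release in Matteo's repro test is just a counter decrement and never sees a lock it doesn't own.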
[jira] [Reopened] (HBASE-16219) Move meta bootstrap out of HMaster
[ https://issues.apache.org/jira/browse/HBASE-16219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang reopened HBASE-16219: --- {{MasterMetaBootstrap}} should be placed under o.a.h.h.master I think? I do not know why mvn install could pass but eclipse does report a compile error... > Move meta bootstrap out of HMaster > -- > > Key: HBASE-16219 > URL: https://issues.apache.org/jira/browse/HBASE-16219 > Project: HBase > Issue Type: Sub-task > Components: master, Region Assignment >Affects Versions: 2.0.0 >Reporter: Matteo Bertozzi >Assignee: Matteo Bertozzi >Priority: Trivial > Fix For: 2.0.0 > > Attachments: HBASE-16219-v0.patch > > > another cleanup to have a smaller integration patch for the new AM. > Trying to isolate the Assignment code from the HMaster. > Move all the bootstrap code to split meta logs and assign meta regions from > HMaster to a MasterMetaBootstrap class to also reduce the long > finishActiveMasterInitialization() method -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16233) Procedure V2: Support acquire/release shared table lock concurrently
[ https://issues.apache.org/jira/browse/HBASE-16233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stephen Yuan Jiang updated HBASE-16233: --- Description: {{MasterProcedureScheduler.TableQueue}} class only has one single instance of TableLock ({{private TableLock tableLock = null;}}) to track exclusive/shared table lock from TableLockManager. When multiple shared lock request comes, the later shared lock request would overwrite the lock acquired from earlier shared lock request, and hence, we will get some weird error when the second or later release lock request comes, because we lose track of the lock. The issue can be reproduced in the unit test of HBASE-14552. [~mbertozzi] also comes up with a UT without using any real procedure to repro the problem: {code} @Test public void testSchedWithZkLock() throws Exception { MiniZooKeeperCluster zkCluster = new MiniZooKeeperCluster(conf); int zkPort = zkCluster.startup(new File("/tmp/test-zk")); Thread.sleep(1); conf.set("hbase.zookeeper.quorum", "localhost:" + zkPort); ZooKeeperWatcher zkw = new ZooKeeperWatcher(conf, "testSchedWithZkLock", null, false); queue = new MasterProcedureScheduler(conf, TableLockManager.createTableLockManager( conf, zkw, ServerName.valueOf("localhost", 12345, 1))); final TableName tableName = TableName.valueOf("testtb"); TestTableProcedure procA = new TestTableProcedure(1, tableName, TableProcedureInterface.TableOperationType.READ); TestTableProcedure procB = new TestTableProcedure(2, tableName, TableProcedureInterface.TableOperationType.READ); assertTrue(queue.tryAcquireTableSharedLock(procA, tableName)); assertTrue(queue.tryAcquireTableSharedLock(procB, tableName)); queue.releaseTableSharedLock(procA, tableName); queue.releaseTableSharedLock(procB, tableName); } {code} was: {{MasterProcedureScheduler.TableQueue}} class only has one single instance of TableLock ({{private TableLock tableLock = null;}}) to track exclusive/shared table lock from TableLockManager. 
When multiple shared lock request comes, the later shared lock request would overwrite the lock acquired from earlier shared lock request, and hence, we will get some weird error when the second or later release lock request comes, because we lose track of the lock. > Procedure V2: Support acquire/release shared table lock concurrently > > > Key: HBASE-16233 > URL: https://issues.apache.org/jira/browse/HBASE-16233 > Project: HBase > Issue Type: Sub-task > Components: proc-v2 >Reporter: Stephen Yuan Jiang >Assignee: Stephen Yuan Jiang > Fix For: 2.0.0 > > Attachments: HBASE-16233.v1-master.patch > > > {{MasterProcedureScheduler.TableQueue}} class only has one single instance of > TableLock ({{private TableLock tableLock = null;}}) to track exclusive/shared > table lock from TableLockManager. > When multiple shared lock request comes, the later shared lock request would > overwrite the lock acquired from earlier shared lock request, and hence, we > will get some weird error when the second or later release lock request > comes, because we lose track of the lock. > The issue can be reproduced in the unit test of HBASE-14552. 
[~mbertozzi] > also comes up with a UT without using any real procedure to repro the problem: > {code} > @Test > public void testSchedWithZkLock() throws Exception { > MiniZooKeeperCluster zkCluster = new MiniZooKeeperCluster(conf); > int zkPort = zkCluster.startup(new File("/tmp/test-zk")); > Thread.sleep(1); > conf.set("hbase.zookeeper.quorum", "localhost:" + zkPort); > ZooKeeperWatcher zkw = new ZooKeeperWatcher(conf, "testSchedWithZkLock", > null, false); > queue = new MasterProcedureScheduler(conf, > TableLockManager.createTableLockManager( > conf, zkw, ServerName.valueOf("localhost", 12345, 1))); > final TableName tableName = TableName.valueOf("testtb"); > TestTableProcedure procA = new TestTableProcedure(1, tableName, > TableProcedureInterface.TableOperationType.READ); > TestTableProcedure procB = new TestTableProcedure(2, tableName, > TableProcedureInterface.TableOperationType.READ); > assertTrue(queue.tryAcquireTableSharedLock(procA, tableName)); > assertTrue(queue.tryAcquireTableSharedLock(procB, tableName)); > queue.releaseTableSharedLock(procA, tableName); > queue.releaseTableSharedLock(procB, tableName); > } > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16233) Procedure V2: Support acquire/release shared table lock concurrently
[ https://issues.apache.org/jira/browse/HBASE-16233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stephen Yuan Jiang updated HBASE-16233: --- Attachment: HBASE-16233.v1-master.patch > Procedure V2: Support acquire/release shared table lock concurrently > > > Key: HBASE-16233 > URL: https://issues.apache.org/jira/browse/HBASE-16233 > Project: HBase > Issue Type: Sub-task > Components: proc-v2 >Reporter: Stephen Yuan Jiang >Assignee: Stephen Yuan Jiang > Fix For: 2.0.0 > > Attachments: HBASE-16233.v1-master.patch > > > {{MasterProcedureScheduler.TableQueue}} class only has one single instance of > TableLock ({{private TableLock tableLock = null;}}) to track exclusive/shared > table lock from TableLockManager. > When multiple shared lock request comes, the later shared lock request would > overwrite the lock acquired from earlier shared lock request, and hence, we > will get some weird error when the second or later release lock request > comes, because we lose track of the lock. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16144) Replication queue's lock will live forever if RS acquiring the lock has died prematurely
[ https://issues.apache.org/jira/browse/HBASE-16144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15378706#comment-15378706 ] Duo Zhang commented on HBASE-16144: --- OK... Will commit this evening if no objections. Thanks. > Replication queue's lock will live forever if RS acquiring the lock has died > prematurely > > > Key: HBASE-16144 > URL: https://issues.apache.org/jira/browse/HBASE-16144 > Project: HBase > Issue Type: Bug >Affects Versions: 1.2.1, 1.1.5, 0.98.20 >Reporter: Phil Yang >Assignee: Phil Yang > Attachments: HBASE-16144-0.98.v1.patch, > HBASE-16144-branch-1-v1.patch, HBASE-16144-branch-1-v2.patch, > HBASE-16144-branch-1.1-v1.patch, HBASE-16144-branch-1.1-v2.patch, > HBASE-16144-v1.patch, HBASE-16144-v2.patch, HBASE-16144-v3.patch, > HBASE-16144-v4.patch, HBASE-16144-v5.patch, HBASE-16144-v6.patch, > HBASE-16144-v6.patch > > > In default, we will use multi operation when we claimQueues from ZK. But if > we set hbase.zookeeper.useMulti=false, we will add a lock first, then copy > nodes, finally clean old queue and the lock. > However, if the RS acquiring the lock crash before claimQueues done, the lock > will always be there and other RS can never claim the queue. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16183) Correct errors in example programs of coprocessor in Ref Guide
[ https://issues.apache.org/jira/browse/HBASE-16183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15378678#comment-15378678 ] Xiang Li commented on HBASE-16183: -- Thanks for your time and guidance, [~misty] and [~carp84]! Thanks [~jerryhe] for helping me on the permission issue! > Correct errors in example programs of coprocessor in Ref Guide > -- > > Key: HBASE-16183 > URL: https://issues.apache.org/jira/browse/HBASE-16183 > Project: HBase > Issue Type: Bug > Components: documentation >Affects Versions: 1.2.0 >Reporter: Xiang Li >Assignee: Xiang Li >Priority: Minor > Labels: documentaion > Fix For: 2.0.0 > > Attachments: HBASE-16183-master-v1.patch, > HBASE-16183.master.002.patch, HBASE-16183.patch > > > There are some errors in the example programs for coprocessor in Ref Guide. > Such as using deprecated APIs, generic... -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-15305) Fix a couple of incorrect anchors in HBase ref guide
[ https://issues.apache.org/jira/browse/HBASE-15305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Misty Stanley-Jones updated HBASE-15305: Resolution: Fixed Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) Thanks! > Fix a couple of incorrect anchors in HBase ref guide > > > Key: HBASE-15305 > URL: https://issues.apache.org/jira/browse/HBASE-15305 > Project: HBase > Issue Type: Sub-task > Components: documentation >Reporter: Misty Stanley-Jones >Assignee: Misty Stanley-Jones > Fix For: 2.0.0 > > Attachments: HBASE-15305-v2.patch > > > From HBASE-15298: > {quote} > After this patch is applied, there are still two missing asciidoc anchors, > distributed.log.splitting and fail.fast.expired.active.master. These are > related to features removed by HBASE-14053 and HBASE-10569. I think these > anchors(and related texts) should be handled by someone who knows those > issues well, so I retain them. > {quote} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16183) Correct errors in example programs of coprocessor in Ref Guide
[ https://issues.apache.org/jira/browse/HBASE-16183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Misty Stanley-Jones updated HBASE-16183: Resolution: Fixed Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) Thank you! Pushed. > Correct errors in example programs of coprocessor in Ref Guide > -- > > Key: HBASE-16183 > URL: https://issues.apache.org/jira/browse/HBASE-16183 > Project: HBase > Issue Type: Bug > Components: documentation >Affects Versions: 1.2.0 >Reporter: Xiang Li >Assignee: Xiang Li >Priority: Minor > Labels: documentaion > Fix For: 2.0.0 > > Attachments: HBASE-16183-master-v1.patch, > HBASE-16183.master.002.patch, HBASE-16183.patch > > > There are some errors in the example programs for coprocessor in Ref Guide. > Such as using deprecated APIs, generic... -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-15335) Add composite key support in row key
[ https://issues.apache.org/jira/browse/HBASE-15335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15378457#comment-15378457 ] Zhan Zhang commented on HBASE-15335: [~tedyu] The scaladoc warning seems to be a false positive, as I didn't see the map of the comments of method apply in object HBaseTableCatalog > Add composite key support in row key > > > Key: HBASE-15335 > URL: https://issues.apache.org/jira/browse/HBASE-15335 > Project: HBase > Issue Type: Sub-task >Reporter: Zhan Zhang >Assignee: Zhan Zhang > Attachments: HBASE-15335-1.patch, HBASE-15335-2.patch, > HBASE-15335-3.patch, HBASE-15335-4.patch, HBASE-15335-5.patch, > HBASE-15335-6.patch, HBASE-15335-7.patch, HBASE-15335-8.patch, > HBASE-15335-9.patch > > > Add composite key filter support in the connector. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (HBASE-16232) ITBLL fails on branch-1.3, now loosing actual keys
[ https://issues.apache.org/jira/browse/HBASE-16232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Antonov reassigned HBASE-16232: --- Assignee: Mikhail Antonov > ITBLL fails on branch-1.3, now loosing actual keys > -- > > Key: HBASE-16232 > URL: https://issues.apache.org/jira/browse/HBASE-16232 > Project: HBase > Issue Type: Bug > Components: dataloss, integration tests >Affects Versions: 1.3.0 >Reporter: Mikhail Antonov >Assignee: Mikhail Antonov >Priority: Blocker > > So I'm running ITBLL off branch-1.3 on recent commit (after [~stack]'s fix > for fake keys showing up in the scans) with increased number of regions per > regionserver and seeing the following. > {quote} > $Verify$Counts > REFERENCED0 4,999,999,994 4,999,999,994 > UNDEFINED 0 3 3 > UNREFERENCED 0 3 3 > {quote} > So we're loosing some keys. This time those aren't fake: > {quote} > undef > \x89\x10\xE0\xBBx\xF1\xC4\xBAY`\xC4\xD77\x87\x84\x0F 0 1 1 > \x89\x11\x0F\xBA@\x0D8^\xAE \xB1\xCAh\xEB&\xE30 1 1 > \x89\x16waxv;\xB1\xE3Z\xE6"|\xFC\xBE\x9A 0 1 1 > unref > \x15\x1F*f\x92i6\x86\x1D\x8E\xB7\xE1\xC1=\x96\xEF 0 1 1 > \xF4G\xC6E\xD6\xF1\xAB\xB7\xDB\xC0\x94\xF2\xE7mN\xEC 0 1 1 > U\x0F'\x88\x106\x19\x1C\x87Y"\xF3\xE6\xC1\xC8\x15 > {quote} > Re-running verify step with CM off still shows this issue. Search tool > reports: > {quote} > Total > \x89\x11\x0F\xBA@\x0D8^\xAE \xB1\xCAh\xEB&\xE35 0 5 > \x89\x16waxv;\xB1\xE3Z\xE6"|\xFC\xBE\x9A 4 0 4 > CELL_WITH_MISSING_ROW 15 0 15 > {quote} > Will post more as I dig into. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16232) ITBLL fails on branch-1.3, now loosing actual keys
[ https://issues.apache.org/jira/browse/HBASE-16232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15378450#comment-15378450 ] Mikhail Antonov commented on HBASE-16232: - I did not try this test config with 1.2.2 though, so I'm not yet sure where the regression crept in. > ITBLL fails on branch-1.3, now loosing actual keys > -- > > Key: HBASE-16232 > URL: https://issues.apache.org/jira/browse/HBASE-16232 > Project: HBase > Issue Type: Bug > Components: dataloss, integration tests >Affects Versions: 1.3.0 >Reporter: Mikhail Antonov >Priority: Blocker > > So I'm running ITBLL off branch-1.3 on recent commit (after [~stack]'s fix > for fake keys showing up in the scans) with increased number of regions per > regionserver and seeing the following. > {quote} > $Verify$Counts > REFERENCED0 4,999,999,994 4,999,999,994 > UNDEFINED 0 3 3 > UNREFERENCED 0 3 3 > {quote} > So we're loosing some keys. This time those aren't fake: > {quote} > undef > \x89\x10\xE0\xBBx\xF1\xC4\xBAY`\xC4\xD77\x87\x84\x0F 0 1 1 > \x89\x11\x0F\xBA@\x0D8^\xAE \xB1\xCAh\xEB&\xE30 1 1 > \x89\x16waxv;\xB1\xE3Z\xE6"|\xFC\xBE\x9A 0 1 1 > unref > \x15\x1F*f\x92i6\x86\x1D\x8E\xB7\xE1\xC1=\x96\xEF 0 1 1 > \xF4G\xC6E\xD6\xF1\xAB\xB7\xDB\xC0\x94\xF2\xE7mN\xEC 0 1 1 > U\x0F'\x88\x106\x19\x1C\x87Y"\xF3\xE6\xC1\xC8\x15 > {quote} > Re-running verify step with CM off still shows this issue. Search tool > reports: > {quote} > Total > \x89\x11\x0F\xBA@\x0D8^\xAE \xB1\xCAh\xEB&\xE35 0 5 > \x89\x16waxv;\xB1\xE3Z\xE6"|\xFC\xBE\x9A 4 0 4 > CELL_WITH_MISSING_ROW 15 0 15 > {quote} > Will post more as I dig into. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-16232) ITBLL fails on branch-1.3, now loosing actual keys
Mikhail Antonov created HBASE-16232: --- Summary: ITBLL fails on branch-1.3, now loosing actual keys Key: HBASE-16232 URL: https://issues.apache.org/jira/browse/HBASE-16232 Project: HBase Issue Type: Bug Components: dataloss, integration tests Affects Versions: 1.3.0 Reporter: Mikhail Antonov Priority: Blocker So I'm running ITBLL off branch-1.3 on recent commit (after [~stack]'s fix for fake keys showing up in the scans) with increased number of regions per regionserver and seeing the following. {quote} $Verify$Counts REFERENCED 0 4,999,999,994 4,999,999,994 UNDEFINED 0 3 3 UNREFERENCED0 3 3 {quote} So we're loosing some keys. This time those aren't fake: {quote} undef \x89\x10\xE0\xBBx\xF1\xC4\xBAY`\xC4\xD77\x87\x84\x0F0 1 1 \x89\x11\x0F\xBA@\x0D8^\xAE \xB1\xCAh\xEB&\xE3 0 1 1 \x89\x16waxv;\xB1\xE3Z\xE6"|\xFC\xBE\x9A0 1 1 unref \x15\x1F*f\x92i6\x86\x1D\x8E\xB7\xE1\xC1=\x96\xEF 0 1 1 \xF4G\xC6E\xD6\xF1\xAB\xB7\xDB\xC0\x94\xF2\xE7mN\xEC0 1 1 U\x0F'\x88\x106\x19\x1C\x87Y"\xF3\xE6\xC1\xC8\x15 {quote} Re-running verify step with CM off still shows this issue. Search tool reports: {quote} Total \x89\x11\x0F\xBA@\x0D8^\xAE \xB1\xCAh\xEB&\xE3 5 0 5 \x89\x16waxv;\xB1\xE3Z\xE6"|\xFC\xBE\x9A4 0 4 CELL_WITH_MISSING_ROW 15 0 15 {quote} Will post more as I dig into. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (HBASE-14070) Hybrid Logical Clocks for HBase
[ https://issues.apache.org/jira/browse/HBASE-14070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15378417#comment-15378417 ] Sai Teja Ranuva edited comment on HBASE-14070 at 7/14/16 9:26 PM: -- [~enis] Referring to the description in the RB link - "TTL works with HLC timestamps and SYSTEM timestamps". I feel TTL might not work well with HLC, as there could be a message with a much greater PT than the current system time, say by a second (but less than the max delta we set), which can take the HLC physical time forward. If you read the HLC time before the message was received and again after it was received, the difference will be inflated by that one second. Can you clarify this aspect? was (Author: saitejar): [~enis] Referring to description in RB link - "TTL works with HLC timestamps and SYSTEM timestamps". I feel TTL might not work well with HLC, as there could be a message with much greater PT than the current system time, say a second (but less than max delta we set), which can take the HLC physical time forward. If you read the time the HLC time before the message was received and after the message was received, the difference will be inflated by one second. Can you clarity this aspect ? > Hybrid Logical Clocks for HBase > --- > > Key: HBASE-14070 > URL: https://issues.apache.org/jira/browse/HBASE-14070 > Project: HBase > Issue Type: New Feature >Reporter: Enis Soztutar >Assignee: Sai Teja Ranuva > Attachments: HybridLogicalClocksforHBaseandPhoenix.docx, > HybridLogicalClocksforHBaseandPhoenix.pdf > > > HBase and Phoenix uses systems physical clock (PT) to give timestamps to > events (read and writes). This works mostly when the system clock is strictly > monotonically increasing and there is no cross-dependency between servers > clocks. However we know that leap seconds, general clock skew and clock drift > are in fact real.
> This jira proposes using Hybrid Logical Clocks (HLC) as an implementation of > a hybrid physical clock + a logical clock. HLC is the best of both worlds: it > keeps a causality relationship similar to logical clocks, but is still > compatible with an NTP-based physical system clock. HLC can be represented in > 64 bits. > A design document is attached and also can be found here: > https://docs.google.com/document/d/1LL2GAodiYi0waBz5ODGL4LDT4e_bXy8P9h6kWC05Bhw/edit# -- This message was sent by Atlassian JIRA (v6.3.4#6332)
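The 64-bit representation and the "remote PT pulls the clock forward" behavior discussed in the comments can be made concrete with a small sketch. The 44/20 bit split, class name, and update rule below are illustrative assumptions following the general HLC algorithm, not the layout from the attached design document:

```java
// Illustrative Hybrid Logical Clock. The 44/20 bit split, names, and update
// rule are assumptions following the general HLC algorithm, not the layout
// from the attached design document.
public class HlcSketch {
    public static final int LOGICAL_BITS = 20;
    public static final long LOGICAL_MASK = (1L << LOGICAL_BITS) - 1;

    private long physical; // high bits: physical time in milliseconds
    private long logical;  // low bits: logical counter

    // Advance the clock for a local event or a send.
    public synchronized long now(long wallTimeMs) {
        if (wallTimeMs > physical) {
            physical = wallTimeMs;
            logical = 0;
        } else {
            logical++; // wall clock has not moved past the HLC; bump the counter
        }
        return pack(physical, logical);
    }

    // Merge a remote timestamp on receipt. A remote physical component ahead
    // of the local wall clock pulls the HLC's physical part forward.
    public synchronized long update(long remotePacked, long wallTimeMs) {
        long remotePhysical = remotePacked >>> LOGICAL_BITS;
        long remoteLogical = remotePacked & LOGICAL_MASK;
        if (wallTimeMs > physical && wallTimeMs > remotePhysical) {
            physical = wallTimeMs;
            logical = 0;
        } else if (remotePhysical > physical) {
            physical = remotePhysical;
            logical = remoteLogical + 1;
        } else if (remotePhysical == physical) {
            logical = Math.max(logical, remoteLogical) + 1;
        } else {
            logical++;
        }
        return pack(physical, logical);
    }

    public static long pack(long physical, long logical) {
        return (physical << LOGICAL_BITS) | (logical & LOGICAL_MASK);
    }
}
```

The `update` path is the one behind the TTL question raised in the comments: a remote message whose physical component is a second ahead of the local wall clock advances `physical` immediately, so two local reads of the clock straddling that receipt see an interval inflated by that second.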
[jira] [Comment Edited] (HBASE-16117) Fix Connection leak in mapred.TableOutputFormat
[ https://issues.apache.org/jira/browse/HBASE-16117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15378413#comment-15378413 ] Jonathan Hsieh edited comment on HBASE-16117 at 7/14/16 9:27 PM: - About the branch-1.001 patch version -- the failure is related -- will dig into it. This patch undoes what the release note says is a problem, and it is now essentially compatible, except that it throws an exception if the zk connection fails first. was (Author: jmhsieh): About the branch-1.001 patch version -- failure is unrelated. This patch undoes what the release not says is a problem, and now is essentially compatible except that it throws an exception if zk connection fails first). > Fix Connection leak in mapred.TableOutputFormat > > > Key: HBASE-16117 > URL: https://issues.apache.org/jira/browse/HBASE-16117 > Project: HBase > Issue Type: Bug > Components: mapreduce >Affects Versions: 2.0.0, 1.3.0, 1.2.2, 1.1.6 >Reporter: Jonathan Hsieh >Assignee: Jonathan Hsieh > Fix For: 2.0.0, 1.1.6, 1.3.1, 1.2.3 > > Attachments: HBASE-16117.branch-1.001.patch, > hbase-16117.branch-1.patch, hbase-16117.patch, hbase-16117.v2.branch-1.patch, > hbase-16117.v2.patch, hbase-16117.v3.branch-1.patch, hbase-16117.v3.patch, > hbase-16117.v4.patch > > > Spark seems to instantiate multiple instances of output formats within a > single process. When mapred.TableOutputFormat (not > mapreduce.TableOutputFormat) is used, this may cause connection leaks that > slowly exhaust the cluster's zk connections. > This patch fixes that. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-14070) Hybrid Logical Clocks for HBase
[ https://issues.apache.org/jira/browse/HBASE-14070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15378417#comment-15378417 ] Sai Teja Ranuva commented on HBASE-14070: - [~enis] Referring to description in RB link - "TTL works with HLC timestamps and SYSTEM timestamps". I feel TTL might not work well with HLC, as there could be a message with much greater PT than the current system time, say a second (but less than max delta we set), which can take the HLC physical time forward. If you read the time the HLC time before the message was received and after the message was received, the difference will be inflated by one second. Can you clarity this aspect ? > Hybrid Logical Clocks for HBase > --- > > Key: HBASE-14070 > URL: https://issues.apache.org/jira/browse/HBASE-14070 > Project: HBase > Issue Type: New Feature >Reporter: Enis Soztutar >Assignee: Sai Teja Ranuva > Attachments: HybridLogicalClocksforHBaseandPhoenix.docx, > HybridLogicalClocksforHBaseandPhoenix.pdf > > > HBase and Phoenix uses systems physical clock (PT) to give timestamps to > events (read and writes). This works mostly when the system clock is strictly > monotonically increasing and there is no cross-dependency between servers > clocks. However we know that leap seconds, general clock skew and clock drift > are in fact real. > This jira proposes using Hybrid Logical Clocks (HLC) as an implementation of > hybrid physical clock + a logical clock. HLC is best of both worlds where it > keeps causality relationship similar to logical clocks, but still is > compatible with NTP based physical system clock. HLC can be represented in > 64bits. > A design document is attached and also can be found here: > https://docs.google.com/document/d/1LL2GAodiYi0waBz5ODGL4LDT4e_bXy8P9h6kWC05Bhw/edit# -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-14070) Hybrid Logical Clocks for HBase
[ https://issues.apache.org/jira/browse/HBASE-14070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15378409#comment-15378409 ] Sai Teja Ranuva commented on HBASE-14070: - Thank you for the clarification. > Hybrid Logical Clocks for HBase > --- > > Key: HBASE-14070 > URL: https://issues.apache.org/jira/browse/HBASE-14070 > Project: HBase > Issue Type: New Feature >Reporter: Enis Soztutar >Assignee: Sai Teja Ranuva > Attachments: HybridLogicalClocksforHBaseandPhoenix.docx, > HybridLogicalClocksforHBaseandPhoenix.pdf > > > HBase and Phoenix uses systems physical clock (PT) to give timestamps to > events (read and writes). This works mostly when the system clock is strictly > monotonically increasing and there is no cross-dependency between servers > clocks. However we know that leap seconds, general clock skew and clock drift > are in fact real. > This jira proposes using Hybrid Logical Clocks (HLC) as an implementation of > hybrid physical clock + a logical clock. HLC is best of both worlds where it > keeps causality relationship similar to logical clocks, but still is > compatible with NTP based physical system clock. HLC can be represented in > 64bits. > A design document is attached and also can be found here: > https://docs.google.com/document/d/1LL2GAodiYi0waBz5ODGL4LDT4e_bXy8P9h6kWC05Bhw/edit# -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16169) Make RegionSizeCalculator scalable
[ https://issues.apache.org/jira/browse/HBASE-16169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thiruvel Thirumoolan updated HBASE-16169: - Release Note: Added a couple of APIs to Admin.java: Returns the region load map of all regions hosted on a region server: Map<byte[], RegionLoad> getRegionLoad(ServerName sn) throws IOException; Returns the region load map of all regions of a table hosted on a region server: Map<byte[], RegionLoad> getRegionLoad(ServerName sn, TableName tableName) throws IOException > Make RegionSizeCalculator scalable > -- > > Key: HBASE-16169 > URL: https://issues.apache.org/jira/browse/HBASE-16169 > Project: HBase > Issue Type: Sub-task > Components: mapreduce, scaling >Reporter: Thiruvel Thirumoolan >Assignee: Thiruvel Thirumoolan > Fix For: 2.0.0, 1.4.0 > > Attachments: HBASE-16169.master.000.patch, > HBASE-16169.master.001.patch, HBASE-16169.master.002.patch, > HBASE-16169.master.003.patch, HBASE-16169.master.004.patch > > > RegionSizeCalculator is needed for better split generation of MR jobs. This > requires RegionLoad, which can be obtained via ClusterStatus, i.e. by accessing > the Master. We don't want the master to be in this path. > The proposal is to add an API to the RegionServer that gets the RegionLoad of all > regions hosted on it, or those of a table if specified. RegionSizeCalculator > can use the latter. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
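The two Admin additions described in the release note can be sketched as follows. `ServerName` and `RegionLoad` here are hypothetical stand-ins for the real `org.apache.hadoop.hbase` types, and the map key (a `byte[]` encoded region name in the real API) is simplified to a `String`, so treat this as illustrative only:

```java
import java.io.IOException;

// Self-contained sketch of the per-regionserver RegionLoad API from the
// release note. ServerName and RegionLoad are hypothetical stand-ins for
// the real org.apache.hadoop.hbase types; the real return type is
// Map<byte[], RegionLoad>, simplified here to a String-keyed map.
public class RegionLoadSketch {

    public static class ServerName {
        public final String host;
        public ServerName(String host) { this.host = host; }
    }

    public static class RegionLoad {
        public final long storefileSizeMb;
        public RegionLoad(long storefileSizeMb) { this.storefileSizeMb = storefileSizeMb; }
    }

    // Shape of the proposed calls: served by the RegionServer directly,
    // so no ClusterStatus and no trip through the master.
    public interface Admin {
        java.util.Map<String, RegionLoad> getRegionLoad(ServerName sn) throws IOException;
        java.util.Map<String, RegionLoad> getRegionLoad(ServerName sn, String tableName)
                throws IOException;
    }

    // How a RegionSizeCalculator-style caller could consume the table-scoped
    // variant when sizing regions for MR split generation.
    public static long totalStorefileSizeMb(Admin admin, ServerName sn, String table)
            throws IOException {
        long total = 0;
        for (RegionLoad load : admin.getRegionLoad(sn, table).values()) {
            total += load.storefileSizeMb;
        }
        return total;
    }
}
```

The design point is the second, table-scoped overload: split generation only needs the sizes of one table's regions, so asking each hosting RegionServer directly scales with the table rather than with the whole cluster.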
[jira] [Commented] (HBASE-16230) Calling 'get' in hbase shell with table name that doesn't exist causes it to hang for long time
[ https://issues.apache.org/jira/browse/HBASE-16230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15378316#comment-15378316 ] Konstantin Ryakhovskiy commented on HBASE-16230: I have written a simple, straightforward test for this issue using a mini cluster on branch-1.3, but {{nonExistingTable.get(get)}} throws a {{TableNotFoundException}} as expected. Could it be a configuration issue? > Calling 'get' in hbase shell with table name that doesn't exist causes it to > hang for long time > --- > > Key: HBASE-16230 > URL: https://issues.apache.org/jira/browse/HBASE-16230 > Project: HBase > Issue Type: Bug > Components: Client, shell >Affects Versions: 1.3.0 >Reporter: Mikhail Antonov > > get 'table_that_doesnt_exist', 'x' > hangs for a duration that looks more like an rpc timeout, then says: > ERROR: HRegionInfo was null in -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-14070) Hybrid Logical Clocks for HBase
[ https://issues.apache.org/jira/browse/HBASE-14070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15378304#comment-15378304 ] Sai Teja Ranuva commented on HBASE-14070: - [~enis] Referring to the Description on RB link. Why are Meta Tables not HLC ? What is the reason for it ? > Hybrid Logical Clocks for HBase > --- > > Key: HBASE-14070 > URL: https://issues.apache.org/jira/browse/HBASE-14070 > Project: HBase > Issue Type: New Feature >Reporter: Enis Soztutar >Assignee: Sai Teja Ranuva > Attachments: HybridLogicalClocksforHBaseandPhoenix.docx, > HybridLogicalClocksforHBaseandPhoenix.pdf > > > HBase and Phoenix uses systems physical clock (PT) to give timestamps to > events (read and writes). This works mostly when the system clock is strictly > monotonically increasing and there is no cross-dependency between servers > clocks. However we know that leap seconds, general clock skew and clock drift > are in fact real. > This jira proposes using Hybrid Logical Clocks (HLC) as an implementation of > hybrid physical clock + a logical clock. HLC is best of both worlds where it > keeps causality relationship similar to logical clocks, but still is > compatible with NTP based physical system clock. HLC can be represented in > 64bits. > A design document is attached and also can be found here: > https://docs.google.com/document/d/1LL2GAodiYi0waBz5ODGL4LDT4e_bXy8P9h6kWC05Bhw/edit# -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16209) Provide an ExponentialBackOffPolicy sleep between failed region open requests
[ https://issues.apache.org/jira/browse/HBASE-16209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15378289#comment-15378289 ] Hadoop QA commented on HBASE-16209: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 26s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 45s {color} | {color:green} master passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s {color} | {color:green} master passed with JDK v1.7.0_80 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 53s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 16s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 5s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s {color} | {color:green} master passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s {color} | {color:green} master passed with JDK v1.7.0_80 {color} | | 
{color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 46s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 45s {color} | {color:green} the patch passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 45s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s {color} | {color:green} the patch passed with JDK v1.7.0_80 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 36s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 53s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 16s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 32m 14s {color} | {color:green} Patch does not cause any errors with Hadoop 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 55s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s {color} | {color:green} the patch passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 43s {color} | {color:green} the patch passed with JDK v1.7.0_80 {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 109m 12s {color} | {color:red} hbase-server in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s {color} | {color:green} Patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 158m 41s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hbase.regionserver.TestRegionMergeTransactionOnCluster | | | hadoop.hbase.master.TestMasterStatusServlet | \\ \\ || Subsystem || Report/Notes || | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12817975/HBASE-16209.patch | | JIRA Issue | HBASE-16209 | | Optional Tests | asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master /
[jira] [Updated] (HBASE-15866) Split hbase.rpc.timeout into *.read.timeout and *.write.timeout
[ https://issues.apache.org/jira/browse/HBASE-15866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell updated HBASE-15866: --- Assignee: Vivek Koppuru > Split hbase.rpc.timeout into *.read.timeout and *.write.timeout > --- > > Key: HBASE-15866 > URL: https://issues.apache.org/jira/browse/HBASE-15866 > Project: HBase > Issue Type: Bug >Reporter: Andrew Purtell >Assignee: Vivek Koppuru > Fix For: 2.0.0 > > > We have a single tunable for the RPC timeout interval - hbase.rpc.timeout. > This is fine for the general case but there are use cases where it would be > advantageous to set two separate timeouts for reads (gets, scans, perhaps > with significant server side filtering - although the new scanner heartbeat > feature mitigates where available) and mutations (fail fast under tight SLA, > resubmit or take mitigating action). > I propose we refer to a configuration setting "hbase.rpc.read.timeout" when > handling read operations and "hbase.rpc.write.timeout" when handling write > operations. If those values are not set in the configuration, fall back to > the value of "hbase.rpc.timeout" or its default. > So for example in HTable instead of one global timeout for each RPC > (rpcTimeout), there would be a readRpcTimeout and writeRpcTimeout also set up > in HTable#finishSetup. Then wherever we set up RPC with > RpcRetryingCallerFactory#newCaller(int rpcTimeout) we pass in the read or > write timeout depending on what the op is. > In general I don't like the idea of adding configuration parameters to our > already heavyweight set, but I think the inability to control timeouts > separately for reads and writes is an operational deficit. > See also PHOENIX-2916. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
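The proposed fallback chain (per-operation key first, then `hbase.rpc.timeout`, then the default) is a two-level config lookup. A minimal sketch with `java.util.Properties` standing in for HBase's `Configuration`; the 60000 ms default mirrors the usual `hbase.rpc.timeout` default but is an assumption here:

```java
import java.util.Properties;

// Sketch of the read/write timeout resolution proposed in HBASE-15866.
// Properties stands in for org.apache.hadoop.conf.Configuration, and the
// 60000 ms default is an assumed stand-in for the configured default.
public class RpcTimeoutSketch {
    public static final int DEFAULT_RPC_TIMEOUT_MS = 60000;

    // Resolve a per-operation timeout, falling back to the generic key,
    // then to the hard default.
    static int timeoutFor(Properties conf, String opKey) {
        String generic = conf.getProperty("hbase.rpc.timeout",
                String.valueOf(DEFAULT_RPC_TIMEOUT_MS));
        return Integer.parseInt(conf.getProperty(opKey, generic));
    }

    public static int readTimeout(Properties conf) {
        return timeoutFor(conf, "hbase.rpc.read.timeout");
    }

    public static int writeTimeout(Properties conf) {
        return timeoutFor(conf, "hbase.rpc.write.timeout");
    }
}
```

With this shape, an HTable-style client resolves `readRpcTimeout` and `writeRpcTimeout` once at setup and hands the appropriate one to each `RpcRetryingCallerFactory#newCaller(int)` call; deployments that never set the new keys behave exactly as before.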
[jira] [Commented] (HBASE-16210) Add Timestamp class to the hbase-common and Timestamp type to HTable.
[ https://issues.apache.org/jira/browse/HBASE-16210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15378251#comment-15378251 ] Hadoop QA commented on HBASE-16210: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 31s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 15s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s {color} | {color:green} master passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s {color} | {color:green} master passed with JDK v1.7.0_80 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 46s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 20s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 44s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s {color} | {color:green} master passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 36s {color} | {color:green} master passed with JDK 
v1.7.0_80 {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 36s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s {color} | {color:green} the patch passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 52s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s {color} | {color:green} the patch passed with JDK v1.7.0_80 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 33s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 47s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 20s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 27m 20s {color} | {color:green} Patch does not cause any errors with Hadoop 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 58s {color} | {color:red} hbase-common generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s {color} | {color:green} the patch passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 35s {color} | {color:green} the patch passed with JDK v1.7.0_80 {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 45s {color} | {color:green} hbase-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 56s {color} | {color:green} hbase-client in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 16s {color} | {color:green} Patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 47m 15s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hbase-common | | | Public static org.apache.hadoop.hbase.Timestamp.getTimestamps() may expose internal representation by returning Timestamp.TIMESTAMPS At Timestamp.java:internal representation by returning Timestamp.TIMESTAMPS At Timestamp.java:[line 265] | \\ \\ || Subsystem || Report/Notes || | JIRA Patch URL |
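The new FindBugs complaint above (`getTimestamps()` returning the static `TIMESTAMPS` array "may expose internal representation") has a standard remedy: return a defensive copy or an unmodifiable view. A minimal illustration using a hypothetical values array, not the actual `Timestamp` class:

```java
import java.util.Arrays;

// Illustrates the FindBugs EI_EXPOSE_REP-style warning from the QA run and
// its usual fix. The TIMESTAMPS values here are a hypothetical stand-in for
// the internal array of the patch's Timestamp class.
public class ExposureSketch {
    private static final String[] TIMESTAMPS = {"SYSTEM", "CUSTOM", "HLC"};

    // Flagged pattern: every caller shares (and can mutate) the static array.
    public static String[] getTimestampsUnsafe() {
        return TIMESTAMPS;
    }

    // FindBugs-clean: hand out a fresh copy so internal state stays internal.
    public static String[] getTimestamps() {
        return Arrays.copyOf(TIMESTAMPS, TIMESTAMPS.length);
    }
}
```

Returning `Collections.unmodifiableList(...)` over a `List` works too; the copy keeps the existing array-returning signature intact.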
[jira] [Resolved] (HBASE-9410) Concurrent coprocessor endpoint executions slow down exponentially
[ https://issues.apache.org/jira/browse/HBASE-9410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell resolved HBASE-9410. --- Resolution: Incomplete > Concurrent coprocessor endpoint executions slow down exponentially > -- > > Key: HBASE-9410 > URL: https://issues.apache.org/jira/browse/HBASE-9410 > Project: HBase > Issue Type: Bug > Components: Coprocessors >Affects Versions: 0.94.11 > Environment: Amazon ec2 >Reporter: Kirubakaran Pakkirisamy > Attachments: Search.java, SearchEndpoint.java, SearchProtocol.java, > jstack.log, jstack1.log, jstack2.log, jstack3.log > > > Multiple concurrent executions of coprocessor endpoints slow down > drastically. It is compounded further when there are more Htable connection > setups happening. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-15335) Add composite key support in row key
[ https://issues.apache.org/jira/browse/HBASE-15335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15378156#comment-15378156 ] Hadoop QA commented on HBASE-15335: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 2 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 15s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 53s {color} | {color:green} master passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 7s {color} | {color:green} master passed with JDK v1.7.0_80 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 14s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 42s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 42s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s {color} | {color:green} master passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s {color} | {color:green} master passed with JDK v1.7.0_80 
{color} | | {color:green}+1{color} | {color:green} scaladoc {color} | {color:green} 0m 35s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 8s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 54s {color} | {color:green} the patch passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 0m 54s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 54s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} scalac {color} | {color:green} 0m 54s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 8s {color} | {color:green} the patch passed with JDK v1.7.0_80 {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 8s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 8s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} scalac {color} | {color:green} 1m 8s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 12s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 26m 25s {color} | {color:green} Patch does not cause any errors with Hadoop 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. 
{color} | | {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 42s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 54s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s {color} | {color:green} the patch passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s {color} | {color:green} the patch passed with JDK v1.7.0_80 {color} | | {color:red}-1{color} | {color:red} scaladoc {color} | {color:red} 1m 54s {color} | {color:red} hbase-spark generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} scaladoc {color} | {color:green} 0m 30s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} scaladoc {color} | {color:red} 2m 24s {color}
[jira] [Updated] (HBASE-16210) Add Timestamp class to the hbase-common and Timestamp type to HTable.
[ https://issues.apache.org/jira/browse/HBASE-16210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sai Teja Ranuva updated HBASE-16210: Status: Patch Available (was: Open) > Add Timestamp class to the hbase-common and Timestamp type to HTable. > - > > Key: HBASE-16210 > URL: https://issues.apache.org/jira/browse/HBASE-16210 > Project: HBase > Issue Type: Sub-task >Reporter: Sai Teja Ranuva >Assignee: Sai Teja Ranuva >Priority: Minor > Labels: patch, testing > Attachments: HBASE-16210.master.1.patch, HBASE-16210.master.2.patch, > HBASE-16210.master.3.patch, HBASE-16210.master.4.patch, > HBASE-16210.master.5.patch, HBASE-16210.master.6.patch, > HBASE-16210.master.7.patch > > > This is a sub-issue of > [HBase-14070|https://issues.apache.org/jira/browse/HBASE-14070]. This JIRA is > a small step towards completely adding Hybrid Logical Clocks (HLC) to HBase. > The main idea of HLC is described in > [HBase-14070|https://issues.apache.org/jira/browse/HBASE-14070] along with > the motivation of adding it to HBase. > What is this patch/issue about? > This issue attempts to add a timestamp class to hbase-common and a timestamp > type to HTable. > This is a part of the attempt to get HLC into HBase. This patch does not > interfere with the current working of HBase. > Why Timestamp Class? > The Timestamp class can serve as an abstraction to represent time in HBase in 64 > bits. > It is used for manipulating the 64 bits of the timestamp and is not > concerned with the actual time. > There are three types of timestamps: System time, Custom, and HLC. Each of > them has methods to manipulate the 64 bits of the timestamp. > HTable changes: Added a timestamp type property to HTable. This will help > HBase coexist with the old type of timestamp as well as the HLC which > will be introduced. The default is set to the custom timestamp (the current way > timestamps are used). The default unset timestamp is also the custom timestamp, as it > should be.
The default timestamp will be changed to HLC when the HLC feature > is introduced completely in HBase. > Check HBASE-16210.master.6.patch. > Suggestions are welcome. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16210) Add Timestamp class to the hbase-common and Timestamp type to HTable.
[ https://issues.apache.org/jira/browse/HBASE-16210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sai Teja Ranuva updated HBASE-16210: Status: Open (was: Patch Available) > Add Timestamp class to the hbase-common and Timestamp type to HTable. > - > > Key: HBASE-16210 > URL: https://issues.apache.org/jira/browse/HBASE-16210 > Project: HBase > Issue Type: Sub-task >Reporter: Sai Teja Ranuva >Assignee: Sai Teja Ranuva >Priority: Minor > Labels: patch, testing > Attachments: HBASE-16210.master.1.patch, HBASE-16210.master.2.patch, > HBASE-16210.master.3.patch, HBASE-16210.master.4.patch, > HBASE-16210.master.5.patch, HBASE-16210.master.6.patch, > HBASE-16210.master.7.patch > > > This is a sub-issue of > [HBase-14070|https://issues.apache.org/jira/browse/HBASE-14070]. This JIRA is > a small step towards completely adding Hybrid Logical Clocks(HLC) to HBase. > The main idea of HLC is described in > [HBase-14070|https://issues.apache.org/jira/browse/HBASE-14070] along with > the motivation of adding it to HBase. > What is this patch/issue about ? > This issue attempts to add a timestamp class to hbase-common and timestamp > type to HTable. > This is a part of the attempt to get HLC into HBase. This patch does not > interfere with the current working of HBase. > Why Timestamp Class ? > Timestamp class can be as an abstraction to represent time in Hbase in 64 > bits. > It is just used for manipulating with the 64 bits of the timestamp and is not > concerned about the actual time. > There are three types of timestamps. System time, Custom and HLC. Each one of > it has methods to manipulate the 64 bits of timestamp. > HTable changes: Added a timestamp type property to HTable. This will help > HBase exist in conjunction with old type of timestamp and also the HLC which > will be introduced. The default is set to custom timestamp(current way of > usage of timestamp). default unset timestamp is also custom timestamp as it > should be so. 
The default timestamp will be changed to HLC when the HLC feature > is introduced completely in HBase. > Check HBASE-16210.master.6.patch. > Suggestions are welcome. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
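The 64-bit timestamp manipulation described above can be sketched as follows. This is a minimal illustration only: the class name, the bit split between physical and logical components, and the method names are assumptions, not the layout used by the HBASE-16210 patches.

```java
// Hypothetical sketch of an HLC-style 64-bit timestamp: high bits carry
// physical time in ms, low bits carry a logical counter. The 20-bit
// logical field is an assumed layout, not the one from HBASE-16210.
public class HlcTimestampSketch {
    static final int LOGICAL_BITS = 20;
    static final long LOGICAL_MASK = (1L << LOGICAL_BITS) - 1;

    // Pack physical time (ms) and a logical counter into one long.
    static long pack(long physicalMs, int logical) {
        return (physicalMs << LOGICAL_BITS) | (logical & LOGICAL_MASK);
    }

    // Extract the physical-time component.
    static long physical(long ts) {
        return ts >>> LOGICAL_BITS;
    }

    // Extract the logical-counter component.
    static int logical(long ts) {
        return (int) (ts & LOGICAL_MASK);
    }

    public static void main(String[] args) {
        long ts = pack(1468000000000L, 7);
        System.out.println(physical(ts) + " / " + logical(ts));
    }
}
```

A TimestampType enum (as suggested in the comments) could then dispatch to such pack/unpack rules per type.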
[jira] [Updated] (HBASE-16210) Add Timestamp class to the hbase-common and Timestamp type to HTable.
[ https://issues.apache.org/jira/browse/HBASE-16210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sai Teja Ranuva updated HBASE-16210: Attachment: HBASE-16210.master.7.patch Included changes suggested in the review board comments. > Add Timestamp class to the hbase-common and Timestamp type to HTable. > - > > Key: HBASE-16210 > URL: https://issues.apache.org/jira/browse/HBASE-16210 > Project: HBase > Issue Type: Sub-task >Reporter: Sai Teja Ranuva >Assignee: Sai Teja Ranuva >Priority: Minor > Labels: patch, testing > Attachments: HBASE-16210.master.1.patch, HBASE-16210.master.2.patch, > HBASE-16210.master.3.patch, HBASE-16210.master.4.patch, > HBASE-16210.master.5.patch, HBASE-16210.master.6.patch, > HBASE-16210.master.7.patch > > > This is a sub-issue of > [HBase-14070|https://issues.apache.org/jira/browse/HBASE-14070]. This JIRA is > a small step towards completely adding Hybrid Logical Clocks(HLC) to HBase. > The main idea of HLC is described in > [HBase-14070|https://issues.apache.org/jira/browse/HBASE-14070] along with > the motivation of adding it to HBase. > What is this patch/issue about ? > This issue attempts to add a timestamp class to hbase-common and timestamp > type to HTable. > This is a part of the attempt to get HLC into HBase. This patch does not > interfere with the current working of HBase. > Why Timestamp Class ? > Timestamp class can be as an abstraction to represent time in Hbase in 64 > bits. > It is just used for manipulating with the 64 bits of the timestamp and is not > concerned about the actual time. > There are three types of timestamps. System time, Custom and HLC. Each one of > it has methods to manipulate the 64 bits of timestamp. > HTable changes: Added a timestamp type property to HTable. This will help > HBase exist in conjunction with old type of timestamp and also the HLC which > will be introduced. The default is set to custom timestamp(current way of > usage of timestamp). 
The default for an unset timestamp is also the custom timestamp, as it > should be. The default timestamp will be changed to HLC when the HLC feature > is introduced completely in HBase. > Check HBASE-16210.master.6.patch. > Suggestions are welcome. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16183) Correct errors in example programs of coprocessor in Ref Guide
[ https://issues.apache.org/jira/browse/HBASE-16183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15378122#comment-15378122 ] Hadoop QA commented on HBASE-16183: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 45s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 8s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 45s {color} | {color:green} master passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 35s {color} | {color:green} master passed with JDK v1.7.0_80 {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 6s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 10s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 29m 50s {color} | {color:green} Patch does not cause any errors with Hadoop 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 40s {color} | {color:green} the patch passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 29s {color} | {color:green} the patch passed with JDK v1.7.0_80 {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 111m 31s {color} | {color:green} root in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s {color} | {color:green} Patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 161m 37s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12817963/HBASE-16183.master.002.patch | | JIRA Issue | HBASE-16183 | | Optional Tests | asflicense javac javadoc unit | | uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / a55af38 | | Default Java | 1.7.0_80 | | Multi-JDK versions | /home/jenkins/tools/java/jdk1.8.0:1.8.0 /home/jenkins/jenkins-slave/tools/hudson.model.JDK/JDK_1.7_latest_:1.7.0_80 | | Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/2637/testReport/ | | modules | C: . U: . | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/2637/console | | Powered by | Apache Yetus 0.2.1 http://yetus.apache.org | This message was automatically generated. 
> Correct errors in example programs of coprocessor in Ref Guide > -- > > Key: HBASE-16183 > URL: https://issues.apache.org/jira/browse/HBASE-16183 > Project: HBase > Issue Type: Bug > Components: documentation >Affects Versions: 1.2.0 >Reporter: Xiang Li >Assignee: Xiang Li >Priority: Minor > Labels: documentaion > Fix For: 2.0.0 > > Attachments: HBASE-16183-master-v1.patch, > HBASE-16183.master.002.patch, HBASE-16183.patch > > > There are some errors in the example programs for coprocessor in Ref Guide. > Such as using deprecated APIs, generic... -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-3727) MultiHFileOutputFormat
[ https://issues.apache.org/jira/browse/HBASE-3727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] yi liang updated HBASE-3727: Release Note: MultiHFileOutputFormat supports output of HFiles from multiple tables. It will output directories and hfiles as follows: --table1 --family1 --family2 --Hfiles --table2 --family3 --hfiles --family4 Each family directory and its HFiles match the output of HFileOutputFormat2. was: MultiHFileOutputFormat will output directories and hfiles as follow, --tableDir1 --familyDir1 --familyDir2 --Hfiles --tableDir2 --familyDir3 --hfiles --familyDir4 create 3 level tree directory, first level is using table name as parent directory and then use column family name as child directory, and all related HFiles for one family are under column family directory. Except the table-level directory, the other two are followed hfileoutputformat2. There are only one major modification in HFileOutputFormat2: change the Anonymous Classes of return RecordWriter to a class called HFileRecordWriter extends RecordWriter. > MultiHFileOutputFormat > -- > > Key: HBASE-3727 > URL: https://issues.apache.org/jira/browse/HBASE-3727 > Project: HBase > Issue Type: New Feature >Affects Versions: 2.0.0 >Reporter: Andrew Purtell >Assignee: yi liang >Priority: Minor > Attachments: HBASE-3727-V3.patch, HBASE-3727-V4.patch, > HBASE-3727-V5.patch, MH2.patch, MultiHFileOutputFormat.java, > MultiHFileOutputFormat.java, MultiHFileOutputFormat.java, > TestMultiHFileOutputFormat.java > > > Like MultiTableOutputFormat, but outputting HFiles. Key is tablename as an > IBW. Creates sub-writers (code cut and pasted from HFileOutputFormat) on > demand that produce HFiles in per-table subdirectories of the configured > output path. Does not currently support partitioning for existing tables / > incremental update. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
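The "sub-writers created on demand" pattern behind the per-table layout above can be sketched like this. All names here are illustrative stand-ins, not the API of the HBASE-3727 patches.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the per-table layout from the release note: the first mutation
// seen for a table/family pair lazily registers a writer directory rooted
// at <outputDir>/<table>/<family>, mirroring HFileOutputFormat2 below the
// table level. Names are illustrative, not the patch's API.
public class MultiHFileLayoutSketch {
    private final String outputDir;
    private final Map<String, String> writerDirs = new HashMap<>();

    MultiHFileLayoutSketch(String outputDir) {
        this.outputDir = outputDir;
    }

    // Return (creating on demand) the output directory for a table/family.
    String writerDirFor(String table, String family) {
        return writerDirs.computeIfAbsent(table + "/" + family,
                key -> outputDir + "/" + key);
    }

    public static void main(String[] args) {
        MultiHFileLayoutSketch out = new MultiHFileLayoutSketch("/bulkload");
        System.out.println(out.writerDirFor("table1", "family1"));
    }
}
```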
[jira] [Updated] (HBASE-3727) MultiHFileOutputFormat
[ https://issues.apache.org/jira/browse/HBASE-3727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] yi liang updated HBASE-3727: Release Note: MultiHFileOutputFormat will output directories and hfiles as follows: --tableDir1 --familyDir1 --familyDir2 --Hfiles --tableDir2 --familyDir3 --hfiles --familyDir4 It creates a 3-level directory tree: the first level uses the table name as the parent directory, the second uses the column family name as the child directory, and all HFiles for one family are under the column family directory. Except for the table-level directory, the other two levels follow HFileOutputFormat2. There is only one major modification in HFileOutputFormat2: the anonymous class returning the RecordWriter is changed to a class called HFileRecordWriter extends RecordWriter. > MultiHFileOutputFormat > -- > > Key: HBASE-3727 > URL: https://issues.apache.org/jira/browse/HBASE-3727 > Project: HBase > Issue Type: New Feature >Affects Versions: 2.0.0 >Reporter: Andrew Purtell >Assignee: yi liang >Priority: Minor > Attachments: HBASE-3727-V3.patch, HBASE-3727-V4.patch, > HBASE-3727-V5.patch, MH2.patch, MultiHFileOutputFormat.java, > MultiHFileOutputFormat.java, MultiHFileOutputFormat.java, > TestMultiHFileOutputFormat.java > > > Like MultiTableOutputFormat, but outputting HFiles. Key is tablename as an > IBW. Creates sub-writers (code cut and pasted from HFileOutputFormat) on > demand that produce HFiles in per-table subdirectories of the configured > output path. Does not currently support partitioning for existing tables / > incremental update. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16213) A new HFileBlock structure for fast random get
[ https://issues.apache.org/jira/browse/HBASE-16213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15378053#comment-15378053 ] Anoop Sam John commented on HBASE-16213: Maybe you can consider a key of 50 bytes and a value of 256 bytes. In such a case the number of rows per HFileBlock will be smaller and so the gain may be less. > A new HFileBlock structure for fast random get > -- > > Key: HBASE-16213 > URL: https://issues.apache.org/jira/browse/HBASE-16213 > Project: HBase > Issue Type: New Feature >Reporter: binlijin > Attachments: HBASE-16213-master_v1.patch, HBASE-16213.patch, > HBASE-16213_v2.patch > > > HFileBlock stores cells sequentially; currently, to get a row from the block, > it scans from the first cell until the row's cell. > The new structure stores every row's start offset with the data, so it can find > the exact row with binarySearch. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
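The lookup the row-index block enables can be sketched as below: instead of scanning cells from the start of the block, binary-search a per-row offset table. The parallel-array layout here is an illustration, not the exact HBASE-16213 encoding.

```java
import java.util.Arrays;

// Sketch of a row-index lookup: row keys are stored sorted alongside the
// block-relative start offset of each row's first cell.
public class RowIndexSketch {
    // Return the start offset of a row, or -1 if the row is not in the block.
    static int rowStartOffset(String[] rowKeys, int[] offsets, String row) {
        int i = Arrays.binarySearch(rowKeys, row); // keys are sorted, so O(log n)
        return i >= 0 ? offsets[i] : -1;
    }

    public static void main(String[] args) {
        String[] rows = {"row01", "row05", "row09"};
        int[] offsets = {0, 120, 260};
        System.out.println(rowStartOffset(rows, offsets, "row05")); // prints 120
    }
}
```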
[jira] [Updated] (HBASE-15335) Add composite key support in row key
[ https://issues.apache.org/jira/browse/HBASE-15335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhan Zhang updated HBASE-15335: --- Attachment: HBASE-15335-9.patch > Add composite key support in row key > > > Key: HBASE-15335 > URL: https://issues.apache.org/jira/browse/HBASE-15335 > Project: HBase > Issue Type: Sub-task >Reporter: Zhan Zhang >Assignee: Zhan Zhang > Attachments: HBASE-15335-1.patch, HBASE-15335-2.patch, > HBASE-15335-3.patch, HBASE-15335-4.patch, HBASE-15335-5.patch, > HBASE-15335-6.patch, HBASE-15335-7.patch, HBASE-15335-8.patch, > HBASE-15335-9.patch > > > Add composite key filter support in the connector. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16213) A new HFileBlock structure for fast random get
[ https://issues.apache.org/jira/browse/HBASE-16213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15377886#comment-15377886 ] Ted Yu commented on HBASE-16213: Lijin: Can you add unit tests for ROW_INDEX_V1 ? In the comparison, both key and value are 10B in size. Can you perform comparison on larger key / values ? > A new HFileBlock structure for fast random get > -- > > Key: HBASE-16213 > URL: https://issues.apache.org/jira/browse/HBASE-16213 > Project: HBase > Issue Type: New Feature >Reporter: binlijin > Attachments: HBASE-16213-master_v1.patch, HBASE-16213.patch, > HBASE-16213_v2.patch > > > HFileBlock store cells sequential, current when to get a row from the block, > it scan from the first cell until the row's cell. > The new structure store every row's start offset with data, so it can find > the exact row with binarySearch. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16198) Enhance backup history command
[ https://issues.apache.org/jira/browse/HBASE-16198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15377623#comment-15377623 ] Ted Yu commented on HBASE-16198: {code} 415 return TableName.valueOf(value); 416 } catch (Exception e){ {code} Catching IllegalArgumentException should be enough. For BackupAdmin.java : {code} 111* @param naem - table's name {code} naem: typo {code} 146 List history = table.getBackupHistory(); {code} Can a variant of getBackupHistory() be added which takes a TableName as parameter? This would limit the amount of data retrieved. > Enhance backup history command > -- > > Key: HBASE-16198 > URL: https://issues.apache.org/jira/browse/HBASE-16198 > Project: HBase > Issue Type: New Feature >Affects Versions: 2.0.0 >Reporter: Vladimir Rodionov >Assignee: Vladimir Rodionov > Fix For: 2.0.0 > > Attachments: HBASE-16198-v1.patch > > > We need history per table. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
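The per-table variant suggested in the review could look roughly like the sketch below. BackupRecord and the method names are hypothetical stand-ins; the real BackupAdmin API may differ.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a getBackupHistory(tableName)-style filter: return only the
// history records that include the given table, so less data is retrieved.
public class BackupHistorySketch {
    static class BackupRecord {              // stand-in for the real record type
        final String backupId;
        final List<String> tables;
        BackupRecord(String backupId, List<String> tables) {
            this.backupId = backupId;
            this.tables = tables;
        }
    }

    static List<BackupRecord> historyFor(List<BackupRecord> all, String table) {
        List<BackupRecord> result = new ArrayList<>();
        for (BackupRecord r : all) {
            if (r.tables.contains(table)) {
                result.add(r);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        List<BackupRecord> all = new ArrayList<>();
        all.add(new BackupRecord("b1", java.util.Arrays.asList("t1", "t2")));
        all.add(new BackupRecord("b2", java.util.Arrays.asList("t2")));
        System.out.println(historyFor(all, "t2").size()); // prints 2
    }
}
```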
[jira] [Updated] (HBASE-16209) Provide an ExponentialBackOffPolicy sleep between failed region open requests
[ https://issues.apache.org/jira/browse/HBASE-16209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joseph updated HBASE-16209: --- Status: Patch Available (was: Open) > Provide an ExponentialBackOffPolicy sleep between failed region open requests > - > > Key: HBASE-16209 > URL: https://issues.apache.org/jira/browse/HBASE-16209 > Project: HBase > Issue Type: Bug >Reporter: Joseph >Assignee: Joseph > Attachments: HBASE-16209.patch > > > Related to HBASE-16138. We currently have no pause between retrying > failed region open requests, and with a low maximumAttempt default, we can > quickly use up all our regionOpen retries if the server is in a bad state. I > added an ExponentialBackOffPolicy so that we spread out the timing of our > open region retries in AssignmentManager. Review board at > https://reviews.apache.org/r/50011/ -- This message was sent by Atlassian JIRA (v6.3.4#6332)
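The backoff behavior described above can be sketched minimally as follows. The base delay, cap, and method name are illustrative; the actual HBASE-16209 patch may compute delays differently (e.g. with jitter).

```java
// Minimal exponential backoff sketch for spacing out failed region-open
// retries: the delay doubles per attempt and is capped so sleeps stay bounded.
public class BackoffSketch {
    static long backoffMillis(int attempt, long baseMs, long capMs) {
        long delay = baseMs << Math.min(attempt, 30); // clamp shift to avoid overflow
        return Math.min(delay, capMs);
    }

    public static void main(String[] args) {
        for (int attempt = 0; attempt < 5; attempt++) {
            System.out.println("attempt " + attempt + ": sleep "
                    + backoffMillis(attempt, 100, 60000) + " ms");
        }
    }
}
```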
[jira] [Assigned] (HBASE-16224) Reduce the number of RPCs for the large PUTs
[ https://issues.apache.org/jira/browse/HBASE-16224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu reassigned HBASE-16224: -- Assignee: ChiaPing Tsai (was: Konstantin Ryakhovskiy) > Reduce the number of RPCs for the large PUTs > > > Key: HBASE-16224 > URL: https://issues.apache.org/jira/browse/HBASE-16224 > Project: HBase > Issue Type: Improvement >Reporter: ChiaPing Tsai >Assignee: ChiaPing Tsai >Priority: Minor > Attachments: HBASE-16224-v1.patch, HBASE-16224-v2.patch, > HBASE-16224-v3.patch > > > This patch is proposed to reduce the number of RPCs for large PUTs. > The number and data size of write threads (SingleServerRequestRunnable) are a > result of three main factors: > 1) The flush size taken by BufferedMutatorImpl#backgroundFlushCommits > 2) The limit of task number > 3) ClientBackoffPolicy > A lot of threads created with few mutations is a result of two reasons: 1) > many regions of the target table are on different servers; 2) the flush size in step > one is summed across “all” servers rather than per “individual” server. > This patch removes the limit of flush size in step one and adds a maximum size > to submit for each server in the AsyncProcess. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
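The per-server submit limit described above can be sketched as follows: buffer mutation sizes per target server and flush a server's batch once that server alone crosses the threshold, instead of flushing when the global total does. Class and method names, and the threshold, are illustrative only.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of per-server batching: each server accumulates its own buffered
// byte count, and only the server that crosses the limit is flushed.
public class PerServerBuffer {
    private final long maxPerServerBytes;
    private final Map<String, Long> buffered = new HashMap<>();
    private final List<String> flushed = new ArrayList<>();

    PerServerBuffer(long maxPerServerBytes) {
        this.maxPerServerBytes = maxPerServerBytes;
    }

    void add(String server, long mutationBytes) {
        long size = buffered.merge(server, mutationBytes, Long::sum);
        if (size >= maxPerServerBytes) {   // only this server's batch is sent
            flushed.add(server);
            buffered.put(server, 0L);
        }
    }

    List<String> flushedServers() { return flushed; }

    public static void main(String[] args) {
        PerServerBuffer buf = new PerServerBuffer(1000);
        buf.add("server1", 600);
        buf.add("server1", 500);  // server1 crosses 1000 bytes -> flushed alone
        buf.add("server2", 100);  // server2 stays buffered
        System.out.println(buf.flushedServers());
    }
}
```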
[jira] [Updated] (HBASE-16209) Provide an ExponentialBackOffPolicy sleep between failed region open requests
[ https://issues.apache.org/jira/browse/HBASE-16209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joseph updated HBASE-16209: --- Attachment: HBASE-16209.patch > Provide an ExponentialBackOffPolicy sleep between failed region open requests > - > > Key: HBASE-16209 > URL: https://issues.apache.org/jira/browse/HBASE-16209 > Project: HBase > Issue Type: Bug >Reporter: Joseph >Assignee: Joseph > Attachments: HBASE-16209.patch > > > Related to HBASE-16138. As of now we currently have no pause between retrying > failed region open requests. And with a low maximumAttempt default, we can > quickly use up all our regionOpen retries if the server is in a bad state. I > added in a ExponentialBackOffPolicy so that we spread out the timing of our > open region retries in AssignmentManager. Review board at > https://reviews.apache.org/r/50011/ -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16209) Provide an ExponentialBackOffPolicy sleep between failed region open requests
[ https://issues.apache.org/jira/browse/HBASE-16209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joseph updated HBASE-16209: --- Attachment: HBASE-16209.patch > Provide an ExponentialBackOffPolicy sleep between failed region open requests > - > > Key: HBASE-16209 > URL: https://issues.apache.org/jira/browse/HBASE-16209 > Project: HBase > Issue Type: Bug >Reporter: Joseph >Assignee: Joseph > Attachments: HBASE-16209.patch > > > Related to HBASE-16138. As of now we currently have no pause between retrying > failed region open requests. And with a low maximumAttempt default, we can > quickly use up all our regionOpen retries if the server is in a bad state. I > added in a ExponentialBackOffPolicy so that we spread out the timing of our > open region retries in AssignmentManager. Review board at > https://reviews.apache.org/r/50011/ -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16195) Should not add chunk into chunkQueue if not using chunk pool in HeapMemStoreLAB
[ https://issues.apache.org/jira/browse/HBASE-16195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15377326#comment-15377326 ] Hudson commented on HBASE-16195: FAILURE: Integrated in HBase-0.98-matrix #370 (See [https://builds.apache.org/job/HBase-0.98-matrix/370/]) HBASE-16195 Should not add chunk into chunkQueue if not using chunk pool (liyu: rev 1cfde5c5c1f44196d71813056af77be7c988ec8f) * hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMemStoreLAB.java > Should not add chunk into chunkQueue if not using chunk pool in > HeapMemStoreLAB > --- > > Key: HBASE-16195 > URL: https://issues.apache.org/jira/browse/HBASE-16195 > Project: HBase > Issue Type: Sub-task >Affects Versions: 1.1.5, 1.2.2, 0.98.20 >Reporter: Yu Li >Assignee: Yu Li > Fix For: 2.0.0, 1.3.0, 1.4.0, 1.1.6, 0.98.21, 1.2.3 > > Attachments: HBASE-16195.patch, HBASE-16195_v2.patch, > HBASE-16195_v3.patch, HBASE-16195_v4.patch, HBASE-16195_v4.patch > > > Problem description and analysis please refer to HBASE-16193 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16095) Add priority to TableDescriptor and priority region open thread pool
[ https://issues.apache.org/jira/browse/HBASE-16095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15377327#comment-15377327 ] Hudson commented on HBASE-16095: FAILURE: Integrated in HBase-0.98-matrix #370 (See [https://builds.apache.org/job/HBase-0.98-matrix/370/]) HBASE-16095 Add priority to TableDescriptor and priority region open (apurtell: rev ebe603f05c3669bb0aad8897e2cad8ddb56eea60) * hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java * hbase-server/src/test/java/org/apache/hadoop/hbase/TestHTableDescriptor.java * hbase-client/src/main/java/org/apache/hadoop/hbase/HTableDescriptor.java * hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionOpen.java * hbase-client/src/main/java/org/apache/hadoop/hbase/executor/EventType.java * hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java * hbase-client/src/main/java/org/apache/hadoop/hbase/executor/ExecutorType.java * hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/handler/OpenPriorityRegionHandler.java * hbase-server/src/main/java/org/apache/hadoop/hbase/executor/ExecutorService.java > Add priority to TableDescriptor and priority region open thread pool > > > Key: HBASE-16095 > URL: https://issues.apache.org/jira/browse/HBASE-16095 > Project: HBase > Issue Type: Bug >Reporter: Enis Soztutar >Assignee: Enis Soztutar > Fix For: 2.0.0, 1.3.0, 1.4.0, 0.98.21 > > Attachments: HBASE-16095-0.98.patch, HBASE-16095-0.98.patch, > hbase-16095_v0.patch, hbase-16095_v1.patch, hbase-16095_v2.patch, > hbase-16095_v3.patch > > > This is in the similar area with HBASE-15816, and also required with the > current secondary indexing for Phoenix. > The problem with P secondary indexes is that data table regions depend on > index regions to be able to make progress. Possible distributed deadlocks can > be prevented via custom RpcScheduler + RpcController configuration via > HBASE-11048 and PHOENIX-938. 
However, region opening also has the same > deadlock situation, because a data region open has to replay the WAL edits to > the index regions. There is only 1 thread pool to open regions, with 3 workers > by default. So if the cluster is recovering / restarting from scratch, the > deadlock happens because some index regions cannot be opened due to them > being in the same queue waiting for data regions to open (which wait for > RPCs to index regions which are not open). This is reproduced in almost all > Phoenix secondary index clusters (mutable tables w/o transactions) that we > see. > The proposal is to have a "high priority" region opening thread pool, and > have the HTD carry the relative priority of a table. This may be useful for > other "framework" level tables from Phoenix, Tephra, Trafodion, etc. if they > want some specific tables to become online faster. > As a follow up patch, we can also take a look at how this priority > information can be used by the rpc scheduler on the server side or rpc > controller on the client side, so that we do not have to set priorities > manually per-operation. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
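The dedicated-pool idea above can be sketched minimally: route region-open work for high-priority tables to a separate executor so it never queues behind normal opens. Pool sizes, names, and the boolean flag are illustrative; the patch keys priority off the table descriptor.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch of two region-open pools: high-priority tables (e.g. index/system
// tables) get their own workers so they cannot deadlock behind data regions.
public class PriorityOpenPools {
    final ExecutorService normalPool = Executors.newFixedThreadPool(3);
    final ExecutorService priorityPool = Executors.newFixedThreadPool(3);

    // Route a region-open task to the pool matching its table's priority.
    ExecutorService poolFor(boolean highPriorityTable) {
        return highPriorityTable ? priorityPool : normalPool;
    }

    public static void main(String[] args) {
        PriorityOpenPools pools = new PriorityOpenPools();
        pools.poolFor(true).submit(() -> System.out.println("index region open"));
        pools.normalPool.shutdown();
        pools.priorityPool.shutdown();
    }
}
```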
[jira] [Commented] (HBASE-16195) Should not add chunk into chunkQueue if not using chunk pool in HeapMemStoreLAB
[ https://issues.apache.org/jira/browse/HBASE-16195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15377320#comment-15377320 ] Hudson commented on HBASE-16195: FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #1242 (See [https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/1242/]) HBASE-16195 Should not add chunk into chunkQueue if not using chunk pool (liyu: rev 1cfde5c5c1f44196d71813056af77be7c988ec8f) * hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMemStoreLAB.java > Should not add chunk into chunkQueue if not using chunk pool in > HeapMemStoreLAB > --- > > Key: HBASE-16195 > URL: https://issues.apache.org/jira/browse/HBASE-16195 > Project: HBase > Issue Type: Sub-task >Affects Versions: 1.1.5, 1.2.2, 0.98.20 >Reporter: Yu Li >Assignee: Yu Li > Fix For: 2.0.0, 1.3.0, 1.4.0, 1.1.6, 0.98.21, 1.2.3 > > Attachments: HBASE-16195.patch, HBASE-16195_v2.patch, > HBASE-16195_v3.patch, HBASE-16195_v4.patch, HBASE-16195_v4.patch > > > Problem description and analysis please refer to HBASE-16193 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16095) Add priority to TableDescriptor and priority region open thread pool
[ https://issues.apache.org/jira/browse/HBASE-16095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15377321#comment-15377321 ] Hudson commented on HBASE-16095: FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #1242 (See [https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/1242/]) HBASE-16095 Add priority to TableDescriptor and priority region open (apurtell: rev ebe603f05c3669bb0aad8897e2cad8ddb56eea60) * hbase-server/src/main/java/org/apache/hadoop/hbase/executor/ExecutorService.java * hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java * hbase-client/src/main/java/org/apache/hadoop/hbase/executor/EventType.java * hbase-client/src/main/java/org/apache/hadoop/hbase/executor/ExecutorType.java * hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java * hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/handler/OpenPriorityRegionHandler.java * hbase-client/src/main/java/org/apache/hadoop/hbase/HTableDescriptor.java * hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionOpen.java * hbase-server/src/test/java/org/apache/hadoop/hbase/TestHTableDescriptor.java > Add priority to TableDescriptor and priority region open thread pool > > > Key: HBASE-16095 > URL: https://issues.apache.org/jira/browse/HBASE-16095 > Project: HBase > Issue Type: Bug >Reporter: Enis Soztutar >Assignee: Enis Soztutar > Fix For: 2.0.0, 1.3.0, 1.4.0, 0.98.21 > > Attachments: HBASE-16095-0.98.patch, HBASE-16095-0.98.patch, > hbase-16095_v0.patch, hbase-16095_v1.patch, hbase-16095_v2.patch, > hbase-16095_v3.patch > > > This is in the similar area with HBASE-15816, and also required with the > current secondary indexing for Phoenix. > The problem with P secondary indexes is that data table regions depend on > index regions to be able to make progress. Possible distributed deadlocks can > be prevented via custom RpcScheduler + RpcController configuration via > HBASE-11048 and PHOENIX-938. 
However, region opening also has the same > deadlock situation, because a data region open has to replay the WAL edits to > the index regions. There is only 1 thread pool to open regions, with 3 workers > by default. So if the cluster is recovering / restarting from scratch, the > deadlock happens because some index regions cannot be opened due to them > being in the same queue waiting for data regions to open (which wait for > RPCs to index regions which are not open). This is reproduced in almost all > Phoenix secondary index clusters (mutable tables w/o transactions) that we > see. > The proposal is to have a "high priority" region opening thread pool, and > have the HTD carry the relative priority of a table. This may be useful for > other "framework" level tables from Phoenix, Tephra, Trafodion, etc. if they > want some specific tables to become online faster. > As a follow up patch, we can also take a look at how this priority > information can be used by the rpc scheduler on the server side or rpc > controller on the client side, so that we do not have to set priorities > manually per-operation. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16209) Provide an ExponentialBackOffPolicy sleep between failed region open requests
[ https://issues.apache.org/jira/browse/HBASE-16209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joseph updated HBASE-16209: --- Attachment: (was: HBASE-16209.patch) > Provide an ExponentialBackOffPolicy sleep between failed region open requests > - > > Key: HBASE-16209 > URL: https://issues.apache.org/jira/browse/HBASE-16209 > Project: HBase > Issue Type: Bug >Reporter: Joseph >Assignee: Joseph > > Related to HBASE-16138. As of now we currently have no pause between retrying > failed region open requests. And with a low maximumAttempt default, we can > quickly use up all our regionOpen retries if the server is in a bad state. I > added in a ExponentialBackOffPolicy so that we spread out the timing of our > open region retries in AssignmentManager. Review board at > https://reviews.apache.org/r/50011/ -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16209) Provide an ExponentialBackOffPolicy sleep between failed region open requests
[ https://issues.apache.org/jira/browse/HBASE-16209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joseph updated HBASE-16209: --- Attachment: (was: HBASE-16209.patch) > Provide an ExponentialBackOffPolicy sleep between failed region open requests > - > > Key: HBASE-16209 > URL: https://issues.apache.org/jira/browse/HBASE-16209 > Project: HBase > Issue Type: Bug >Reporter: Joseph >Assignee: Joseph > Attachments: HBASE-16209.patch > > > Related to HBASE-16138. As of now we currently have no pause between retrying > failed region open requests. And with a low maximumAttempt default, we can > quickly use up all our regionOpen retries if the server is in a bad state. I > added in a ExponentialBackOffPolicy so that we spread out the timing of our > open region retries in AssignmentManager. Review board at > https://reviews.apache.org/r/50011/ -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16183) Correct errors in example programs of coprocessor in Ref Guide
[ https://issues.apache.org/jira/browse/HBASE-16183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15377234#comment-15377234 ] Yu Li commented on HBASE-16183: --- v2 lgtm and confirmed it could be applied by "git am" with a sign-off. > Correct errors in example programs of coprocessor in Ref Guide > -- > > Key: HBASE-16183 > URL: https://issues.apache.org/jira/browse/HBASE-16183 > Project: HBase > Issue Type: Bug > Components: documentation >Affects Versions: 1.2.0 >Reporter: Xiang Li >Assignee: Xiang Li >Priority: Minor > Labels: documentaion > Fix For: 2.0.0 > > Attachments: HBASE-16183-master-v1.patch, > HBASE-16183.master.002.patch, HBASE-16183.patch > > > There are some errors in the example programs for coprocessor in Ref Guide. > Such as using deprecated APIs, generic... -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16209) Provide an ExponentialBackOffPolicy sleep between failed region open requests
[ https://issues.apache.org/jira/browse/HBASE-16209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joseph updated HBASE-16209: --- Attachment: (was: HBASE-16209.patch) > Provide an ExponentialBackOffPolicy sleep between failed region open requests > - > > Key: HBASE-16209 > URL: https://issues.apache.org/jira/browse/HBASE-16209 > Project: HBase > Issue Type: Bug >Reporter: Joseph >Assignee: Joseph > Attachments: HBASE-16209.patch > > > Related to HBASE-16138. As of now we currently have no pause between retrying > failed region open requests. And with a low maximumAttempt default, we can > quickly use up all our regionOpen retries if the server is in a bad state. I > added in a ExponentialBackOffPolicy so that we spread out the timing of our > open region retries in AssignmentManager. Review board at > https://reviews.apache.org/r/50011/ -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16209) Provide an ExponentialBackOffPolicy sleep between failed region open requests
[ https://issues.apache.org/jira/browse/HBASE-16209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joseph updated HBASE-16209: --- Status: Open (was: Patch Available) > Provide an ExponentialBackOffPolicy sleep between failed region open requests > - > > Key: HBASE-16209 > URL: https://issues.apache.org/jira/browse/HBASE-16209 > Project: HBase > Issue Type: Bug >Reporter: Joseph >Assignee: Joseph > Attachments: HBASE-16209.patch > > > Related to HBASE-16138. As of now we currently have no pause between retrying > failed region open requests. And with a low maximumAttempt default, we can > quickly use up all our regionOpen retries if the server is in a bad state. I > added in a ExponentialBackOffPolicy so that we spread out the timing of our > open region retries in AssignmentManager. Review board at > https://reviews.apache.org/r/50011/ -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16209) Provide an ExponentialBackOffPolicy sleep between failed region open requests
[ https://issues.apache.org/jira/browse/HBASE-16209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joseph updated HBASE-16209: --- Attachment: HBASE-16209.patch > Provide an ExponentialBackOffPolicy sleep between failed region open requests > - > > Key: HBASE-16209 > URL: https://issues.apache.org/jira/browse/HBASE-16209 > Project: HBase > Issue Type: Bug >Reporter: Joseph >Assignee: Joseph > Attachments: HBASE-16209.patch > > > Related to HBASE-16138. Currently we have no pause between retrying > failed region open requests, and with the low maximumAttempt default we can > quickly use up all our regionOpen retries if the server is in a bad state. I > added an ExponentialBackOffPolicy so that we spread out the timing of our > region open retries in AssignmentManager. Review board at > https://reviews.apache.org/r/50011/ -- This message was sent by Atlassian JIRA (v6.3.4#6332)
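The backoff described in HBASE-16209 can be sketched roughly as follows. This is a minimal illustration of an exponential backoff with a cap, not the actual patch; the class and field names are hypothetical.

```java
// Illustrative sketch of an exponential backoff between failed region open
// retries. Names are hypothetical, not the actual HBASE-16209 code.
public class ExponentialBackOffPolicySketch {
    private final long initialPauseMs;
    private final long maxPauseMs;

    public ExponentialBackOffPolicySketch(long initialPauseMs, long maxPauseMs) {
        this.initialPauseMs = initialPauseMs;
        this.maxPauseMs = maxPauseMs;
    }

    /** Pause doubles with each failed attempt, capped at maxPauseMs. */
    public long getBackoffTime(int attempt) {
        long pause = initialPauseMs << Math.min(attempt, 30); // cap shift to avoid overflow
        return Math.min(pause, maxPauseMs);
    }

    public static void main(String[] args) {
        ExponentialBackOffPolicySketch p = new ExponentialBackOffPolicySketch(100, 10_000);
        for (int attempt = 0; attempt < 8; attempt++) {
            System.out.println("attempt " + attempt + " -> sleep " + p.getBackoffTime(attempt) + " ms");
        }
    }
}
```

With an initial pause of 100 ms and a 10 s cap, the sleep grows 100, 200, 400, 800 ms and so on, so retries against a server in a bad state are spread out instead of being burned immediately.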
[jira] [Updated] (HBASE-16183) Correct errors in example programs of coprocessor in Ref Guide
[ https://issues.apache.org/jira/browse/HBASE-16183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiang Li updated HBASE-16183: - Labels: documentaion (was: ) Status: Patch Available (was: Open) > Correct errors in example programs of coprocessor in Ref Guide > -- > > Key: HBASE-16183 > URL: https://issues.apache.org/jira/browse/HBASE-16183 > Project: HBase > Issue Type: Bug > Components: documentation >Affects Versions: 1.2.0 >Reporter: Xiang Li >Assignee: Xiang Li >Priority: Minor > Labels: documentaion > Fix For: 2.0.0 > > Attachments: HBASE-16183-master-v1.patch, > HBASE-16183.master.002.patch, HBASE-16183.patch > > > There are some errors in the example programs for coprocessor in Ref Guide. > Such as using deprecated APIs, generic... -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16183) Correct errors in example programs of coprocessor in Ref Guide
[ https://issues.apache.org/jira/browse/HBASE-16183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15377186#comment-15377186 ] Xiang Li commented on HBASE-16183: -- [~misty], thanks for reviewing my patch and pointing those out! Really appreciate it! Patch v2 was generated by {{git format-patch}} and uploaded. Would you please review it at your earliest convenience? > Correct errors in example programs of coprocessor in Ref Guide > -- > > Key: HBASE-16183 > URL: https://issues.apache.org/jira/browse/HBASE-16183 > Project: HBase > Issue Type: Bug > Components: documentation >Affects Versions: 1.2.0 >Reporter: Xiang Li >Assignee: Xiang Li >Priority: Minor > Fix For: 2.0.0 > > Attachments: HBASE-16183-master-v1.patch, > HBASE-16183.master.002.patch, HBASE-16183.patch > > > There are some errors in the example programs for coprocessor in Ref Guide. > Such as using deprecated APIs, generic... -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16183) Correct errors in example programs of coprocessor in Ref Guide
[ https://issues.apache.org/jira/browse/HBASE-16183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiang Li updated HBASE-16183: - Attachment: HBASE-16183.master.002.patch > Correct errors in example programs of coprocessor in Ref Guide > -- > > Key: HBASE-16183 > URL: https://issues.apache.org/jira/browse/HBASE-16183 > Project: HBase > Issue Type: Bug > Components: documentation >Affects Versions: 1.2.0 >Reporter: Xiang Li >Assignee: Xiang Li >Priority: Minor > Fix For: 2.0.0 > > Attachments: HBASE-16183-master-v1.patch, > HBASE-16183.master.002.patch, HBASE-16183.patch > > > There are some errors in the example programs for coprocessor in Ref Guide. > Such as using deprecated APIs, generic... -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-15305) Fix a couple of incorrect anchors in HBase ref guide
[ https://issues.apache.org/jira/browse/HBASE-15305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15377150#comment-15377150 ] Jonathan Hsieh commented on HBASE-15305: +1 lgtm. > Fix a couple of incorrect anchors in HBase ref guide > > > Key: HBASE-15305 > URL: https://issues.apache.org/jira/browse/HBASE-15305 > Project: HBase > Issue Type: Sub-task > Components: documentation >Reporter: Misty Stanley-Jones >Assignee: Misty Stanley-Jones > Fix For: 2.0.0 > > Attachments: HBASE-15305-v2.patch > > > From HBASE-15298: > {quote} > After this patch is applied, there are still two missing asciidoc anchors, > distributed.log.splitting and fail.fast.expired.active.master. These are > related to features removed by HBASE-14053 and HBASE-10569. I think these > anchors(and related texts) should be handled by someone who knows those > issues well, so I retain them. > {quote} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16219) Move meta bootstrap out of HMaster
[ https://issues.apache.org/jira/browse/HBASE-16219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15377091#comment-15377091 ] Hudson commented on HBASE-16219: FAILURE: Integrated in HBase-Trunk_matrix #1227 (See [https://builds.apache.org/job/HBase-Trunk_matrix/1227/]) HBASE-16219 Move meta bootstrap out of HMaster (matteo.bertozzi: rev a55af38689fbe273e716ebbf6191e9515986dbf3) * hbase-server/src/main/java/org/apache/hadoop/hbase/MasterMetaBootstrap.java * hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java * hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterNoCluster.java > Move meta bootstrap out of HMaster > -- > > Key: HBASE-16219 > URL: https://issues.apache.org/jira/browse/HBASE-16219 > Project: HBase > Issue Type: Sub-task > Components: master, Region Assignment >Affects Versions: 2.0.0 >Reporter: Matteo Bertozzi >Assignee: Matteo Bertozzi >Priority: Trivial > Fix For: 2.0.0 > > Attachments: HBASE-16219-v0.patch > > > another cleanup to have a smaller integration patch for the new AM. > Trying to isolate the Assignment code from the HMaster. > Move all the bootstrap code to split meta logs and assign meta regions from > HMaster to a MasterMetaBootstrap class to also reduce the long > finishActiveMasterInitialization() method -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16183) Correct errors in example programs of coprocessor in Ref Guide
[ https://issues.apache.org/jira/browse/HBASE-16183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15376947#comment-15376947 ] Yu Li commented on HBASE-16183: --- Please reformat the patch following [~misty]'s comments, and we need at least another committer's +1 to commit this. Thanks. > Correct errors in example programs of coprocessor in Ref Guide > -- > > Key: HBASE-16183 > URL: https://issues.apache.org/jira/browse/HBASE-16183 > Project: HBase > Issue Type: Bug > Components: documentation >Affects Versions: 1.2.0 >Reporter: Xiang Li >Assignee: Xiang Li >Priority: Minor > Fix For: 2.0.0 > > Attachments: HBASE-16183-master-v1.patch, HBASE-16183.patch > > > There are some errors in the example programs for coprocessor in Ref Guide. > Such as using deprecated APIs, generic... -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16219) Move meta bootstrap out of HMaster
[ https://issues.apache.org/jira/browse/HBASE-16219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matteo Bertozzi updated HBASE-16219: Resolution: Fixed Status: Resolved (was: Patch Available) > Move meta bootstrap out of HMaster > -- > > Key: HBASE-16219 > URL: https://issues.apache.org/jira/browse/HBASE-16219 > Project: HBase > Issue Type: Sub-task > Components: master, Region Assignment >Affects Versions: 2.0.0 >Reporter: Matteo Bertozzi >Assignee: Matteo Bertozzi >Priority: Trivial > Fix For: 2.0.0 > > Attachments: HBASE-16219-v0.patch > > > another cleanup to have a smaller integration patch for the new AM. > Trying to isolate the Assignment code from the HMaster. > Move all the bootstrap code to split meta logs and assign meta regions from > HMaster to a MasterMetaBootstrap class to also reduce the long > finishActiveMasterInitialization() method -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16213) A new HFileBlock structure for fast random get
[ https://issues.apache.org/jira/browse/HBASE-16213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15376833#comment-15376833 ] Anoop Sam John commented on HBASE-16213: Thanks. So when we have a per-row size of, say, 256 bytes, we will have a 1 KB overhead for storing these offsets (considering a 64 KB block size: 256 rows at 4 bytes per offset). So in places like calculating bucket cache bucket sizes, users will have to consider this metadata overhead also. > A new HFileBlock structure for fast random get > -- > > Key: HBASE-16213 > URL: https://issues.apache.org/jira/browse/HBASE-16213 > Project: HBase > Issue Type: New Feature >Reporter: binlijin > Attachments: HBASE-16213-master_v1.patch, HBASE-16213.patch, > HBASE-16213_v2.patch > > > HFileBlock stores cells sequentially; currently, to get a row from the block, > it scans from the first cell until the row's cell. > The new structure stores every row's start offset with the data, so it can find > the exact row with binarySearch. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
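The HBASE-16213 idea above — keep each row's start offset alongside the block so a get can binary-search rows instead of scanning from the first cell — can be sketched like this. The layout, field names, and the use of string row keys are simplifications for illustration, not the actual block format.

```java
// Sketch of a row-indexed block: a sorted per-row index lets seekRow do a
// binary search instead of a linear scan. Illustrative only, not HBASE-16213.
public class RowIndexedBlockSketch {
    private final String[] rowKeys;  // stands in for decoding the cell at each offset
    private final int[] rowOffsets;  // start offset of each row within the block

    public RowIndexedBlockSketch(String[] rowKeys, int[] rowOffsets) {
        this.rowKeys = rowKeys;
        this.rowOffsets = rowOffsets;
    }

    /** Binary-search the per-row index; returns the row's start offset or -1. */
    public int seekRow(String key) {
        int lo = 0, hi = rowKeys.length - 1;
        while (lo <= hi) {
            int mid = (lo + hi) >>> 1;
            int cmp = rowKeys[mid].compareTo(key);
            if (cmp == 0) return rowOffsets[mid];
            if (cmp < 0) lo = mid + 1; else hi = mid - 1;
        }
        return -1;
    }

    // Anoop's overhead estimate: a 64 KB block of 256-byte rows holds 256
    // rows, so 4-byte offsets add 256 * 4 = 1024 bytes of index per block.
    public static int indexOverheadBytes(int blockSize, int avgRowSize) {
        return (blockSize / avgRowSize) * 4;
    }
}
```

The trade-off discussed in the comment is exactly the `indexOverheadBytes` term: O(log n) lookups per block, paid for with a few bytes of index per row.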
[jira] [Commented] (HBASE-16205) When Cells are not copied to MSLAB, deep clone it while adding to Memstore
[ https://issues.apache.org/jira/browse/HBASE-16205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15376803#comment-15376803 ] binlijin commented on HBASE-16205: -- Thanks [~anoop.hbase] > When Cells are not copied to MSLAB, deep clone it while adding to Memstore > -- > > Key: HBASE-16205 > URL: https://issues.apache.org/jira/browse/HBASE-16205 > Project: HBase > Issue Type: Sub-task > Components: regionserver >Affects Versions: 2.0.0 >Reporter: Anoop Sam John >Assignee: Anoop Sam John >Priority: Critical > Fix For: 2.0.0 > > Attachments: HBASE-16205.patch > > > This is important after the HBASE-15180 optimization. After that, the cells flowing > in the write path will be backed by the same byte[] into which the RPC read the > request. By default we have MSLAB on, and so we have a copy operation > while adding Cells to the memstore. This copy might not happen if > 1. MSLAB is turned OFF > 2. The cell size is more than a configurable max size (defaults to 256 KB) > 3. The operation is Append/Increment. > In such cases, we should just clone the Cell into a new byte[] and then add it > to the memstore. Otherwise we keep referring to the bigger byte[] chunk for a longer > time. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16169) Make RegionSizeCalculator scalable
[ https://issues.apache.org/jira/browse/HBASE-16169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15376790#comment-15376790 ] Hadoop QA commented on HBASE-16169: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 3 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 52s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 40s {color} | {color:green} master passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 10s {color} | {color:green} master passed with JDK v1.7.0_80 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 6m 55s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 32s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 43s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 30s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 8s {color} | {color:green} master passed with JDK v1.8.0 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s {color} | {color:green} master passed with JDK v1.7.0_80 {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 27s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 51s {color} | {color:green} the patch passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 51s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 51s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 11s {color} | {color:green} the patch passed with JDK v1.7.0_80 {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 11s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 11s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 6m 57s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 32s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 26m 27s {color} | {color:green} Patch does not cause any errors with Hadoop 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. 
{color} | | {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 43s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 16s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 10s {color} | {color:green} the patch passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s {color} | {color:green} the patch passed with JDK v1.7.0_80 {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 19s {color} | {color:green} hbase-protocol in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 57s {color} | {color:green} hbase-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 95m 49s {color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} |
[jira] [Assigned] (HBASE-16224) Reduce the number of RPCs for the large PUTs
[ https://issues.apache.org/jira/browse/HBASE-16224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Konstantin Ryakhovskiy reassigned HBASE-16224: -- Assignee: Konstantin Ryakhovskiy > Reduce the number of RPCs for the large PUTs > > > Key: HBASE-16224 > URL: https://issues.apache.org/jira/browse/HBASE-16224 > Project: HBase > Issue Type: Improvement >Reporter: ChiaPing Tsai >Assignee: Konstantin Ryakhovskiy >Priority: Minor > Attachments: HBASE-16224-v1.patch, HBASE-16224-v2.patch, > HBASE-16224-v3.patch > > > This patch is proposed to reduce the number of RPCs for large PUTs. > The number and data size of write threads (SingleServerRequestRunnable) are a > result of three main factors: > 1) The flush size taken by BufferedMutatorImpl#backgroundFlushCommits > 2) The limit of task number > 3) ClientBackoffPolicy > A lot of threads created with few mutations is a result of two reasons: 1) > many regions of the target table are on different servers; 2) the flush size in step > one is summed across “all” servers rather than per “individual” server. > This patch removes the limit of flush size in step one and adds a maximum size > to submit for each server in the AsyncProcess -- This message was sent by Atlassian JIRA (v6.3.4#6332)
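The grouping described in HBASE-16224 — batch pending mutations per target server and cap the bytes submitted per server per RPC, rather than flushing on a byte total summed across all servers — can be sketched like this. The class and method names are illustrative, and mutation sizes stand in for the actual Put objects.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch: split each server's pending mutation sizes into batches that each
// stay under a per-server submit cap. Illustrative, not the AsyncProcess code.
public class PerServerBatcher {
    static Map<String, List<List<Integer>>> batchBySizePerServer(
            Map<String, List<Integer>> mutationSizesByServer, int maxBytesPerSubmit) {
        Map<String, List<List<Integer>>> batches = new HashMap<>();
        for (Map.Entry<String, List<Integer>> e : mutationSizesByServer.entrySet()) {
            List<List<Integer>> serverBatches = new ArrayList<>();
            List<Integer> current = new ArrayList<>();
            int bytes = 0;
            for (int size : e.getValue()) {
                // Start a new batch when adding this mutation would exceed the cap.
                if (!current.isEmpty() && bytes + size > maxBytesPerSubmit) {
                    serverBatches.add(current);
                    current = new ArrayList<>();
                    bytes = 0;
                }
                current.add(size);
                bytes += size;
            }
            if (!current.isEmpty()) serverBatches.add(current);
            batches.put(e.getKey(), serverBatches);
        }
        return batches;
    }
}
```

Because the cap is applied per server rather than to the global sum, a server that holds many regions of the table gets fewer, fuller RPCs instead of many small ones.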
[jira] [Commented] (HBASE-16205) When Cells are not copied to MSLAB, deep clone it while adding to Memstore
[ https://issues.apache.org/jira/browse/HBASE-16205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15376704#comment-15376704 ] Anoop Sam John commented on HBASE-16205: This JIRA is applicable only to master. {quote} Two case: a request with two cell: cell1=200k, cell2=100k, cell1+cell2>256K, cell1 and cell2 do not copy to MSLAB. Three case: a request with two cell: cell1=300k, cell2=1k, cell1>256K, cell1 and cell2 do not copy to MSLAB. And the two and three case is what i am suggested. {quote} No, what we copy to MSLAB is per cell. The request might contain cells corresponding to different regions. So summing up the sizes of all cells may not be worthwhile, IMO. And here I assume that we will add the feature of reading into the BBPool in trunk soon. > When Cells are not copied to MSLAB, deep clone it while adding to Memstore > -- > > Key: HBASE-16205 > URL: https://issues.apache.org/jira/browse/HBASE-16205 > Project: HBase > Issue Type: Sub-task > Components: regionserver >Affects Versions: 2.0.0 >Reporter: Anoop Sam John >Assignee: Anoop Sam John >Priority: Critical > Fix For: 2.0.0 > > Attachments: HBASE-16205.patch > > > This is important after the HBASE-15180 optimization. After that, the cells flowing > in the write path will be backed by the same byte[] into which the RPC read the > request. By default we have MSLAB on, and so we have a copy operation > while adding Cells to the memstore. This copy might not happen if > 1. MSLAB is turned OFF > 2. The cell size is more than a configurable max size (defaults to 256 KB) > 3. The operation is Append/Increment. > In such cases, we should just clone the Cell into a new byte[] and then add it > to the memstore. Otherwise we keep referring to the bigger byte[] chunk for a longer > time. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
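The deep clone the HBASE-16205 description calls for can be sketched as below: when a cell skips the MSLAB copy, copy just that cell's bytes into a fresh array before it goes into the memstore, so the large RPC buffer can be released. The helper shape here is a simplification, not the actual Cell API.

```java
// Sketch of the HBASE-16205 idea: deep-copy one cell's bytes out of the big
// RPC read buffer so the memstore does not keep the whole buffer alive.
// The method signature is illustrative, not the real Cell/KeyValue API.
public class CellCloneSketch {
    static byte[] deepClone(byte[] rpcBuffer, int cellOffset, int cellLength) {
        // Copy only this cell's bytes; stop referencing the big RPC byte[].
        byte[] clone = new byte[cellLength];
        System.arraycopy(rpcBuffer, cellOffset, clone, 0, cellLength);
        return clone;
    }
}
```

The point of the copy is lifetime, not speed: without it, one long-lived 1 KB cell in the memstore can pin a multi-megabyte request buffer.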
[jira] [Commented] (HBASE-16189) [Rolling Upgrade] 2.0 hfiles cannot be opened by 1.x servers
[ https://issues.apache.org/jira/browse/HBASE-16189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15376699#comment-15376699 ] Anoop Sam John commented on HBASE-16189: Patch looks good. CellComparator is marked private. In any version, we will be writing this class name in the FixedFileTrailer (FFT), so better we mark it as LimitedPrivate with HBaseInterfaceAudience.CONFIG (?) - this is in trunk anyway. Just asking. Please add a reference to this JIRA in code-level comments > [Rolling Upgrade] 2.0 hfiles cannot be opened by 1.x servers > > > Key: HBASE-16189 > URL: https://issues.apache.org/jira/browse/HBASE-16189 > Project: HBase > Issue Type: Sub-task > Components: migration >Reporter: Enis Soztutar >Assignee: ramkrishna.s.vasudevan >Priority: Critical > Fix For: 1.4.0 > > Attachments: HBASE-16189.patch, HBASE-16189_branch-1.patch > > > HBASE-10800 added MetaCellComparator, which gets written to the HFile. 1.x > code does not have the new class, hence fails to open the regions. I did not > check whether this is only for meta or for regular tables as well. 
> {code} > Caused by: org.apache.hadoop.hbase.io.hfile.CorruptHFileException: Problem > reading HFile Trailer from file > hdfs://cn017.l42scl.hortonworks.com:8020/apps/hbase/data/data/hbase/meta/1588230740/info/aa96e4ef463b4a82956330b236440437 > at > org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:483) > at org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:511) > at > org.apache.hadoop.hbase.regionserver.StoreFile$Reader.(StoreFile.java:1123) > at > org.apache.hadoop.hbase.regionserver.StoreFileInfo.open(StoreFileInfo.java:267) > at > org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:409) > at > org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:512) > at > org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:687) > at > org.apache.hadoop.hbase.regionserver.HStore.access$000(HStore.java:130) > at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:554) > at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:551) > ... 6 more > Caused by: java.io.IOException: java.lang.ClassNotFoundException: > org.apache.hadoop.hbase.CellComparator$MetaCellComparator > at > org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.getComparatorClass(FixedFileTrailer.java:581) > at > org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.deserializeFromPB(FixedFileTrailer.java:300) > at > org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.deserialize(FixedFileTrailer.java:242) > at > org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.readFromStream(FixedFileTrailer.java:407) > at > org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:468) > ... 
15 more > Caused by: java.lang.ClassNotFoundException: > org.apache.hadoop.hbase.CellComparator$MetaCellComparator > at java.net.URLClassLoader.findClass(URLClassLoader.java:381) > at java.lang.ClassLoader.loadClass(ClassLoader.java:424) > at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331) > at java.lang.ClassLoader.loadClass(ClassLoader.java:357) > at java.lang.Class.forName0(Native Method) > at java.lang.Class.forName(Class.java:264) > at > org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.getComparatorClass(FixedFileTrailer.java:579) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
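The ClassNotFoundException above comes from the comparator class name that HBASE-10800 started serializing into the FixedFileTrailer. One way to tolerate a name the running version does not have is to map known renamed classes onto available ones before loading, instead of letting Class.forName throw; the sketch below illustrates that idea only — the mapping, names, and approach are assumptions, not the committed HBASE-16189 fix.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: translate a serialized comparator class name from the trailer onto
// a class this version knows, rather than failing with ClassNotFoundException.
// The mapping entries and method name are hypothetical.
public class ComparatorNameMapper {
    private static final Map<String, String> RENAMES = new HashMap<>();
    static {
        // Hypothetical example entry: 2.0 name -> a 1.x-era comparator name.
        RENAMES.put("org.apache.hadoop.hbase.CellComparator$MetaCellComparator",
                    "org.apache.hadoop.hbase.KeyValue$MetaComparator");
    }

    /** Returns the mapped class name, or the input unchanged if unknown. */
    static String resolve(String serializedName) {
        return RENAMES.getOrDefault(serializedName, serializedName);
    }
}
```

Whether the translation lives on the writer side (write the old name for compatibility) or the reader side (map at load time) is exactly the rolling-upgrade trade-off this issue is about.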
[jira] [Updated] (HBASE-16183) Correct errors in example programs of coprocessor in Ref Guide
[ https://issues.apache.org/jira/browse/HBASE-16183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiang Li updated HBASE-16183: - Summary: Correct errors in example programs of coprocessor in Ref Guide (was: Correct errors in example program of coprocessor in Ref Guide) > Correct errors in example programs of coprocessor in Ref Guide > -- > > Key: HBASE-16183 > URL: https://issues.apache.org/jira/browse/HBASE-16183 > Project: HBase > Issue Type: Bug > Components: documentation >Affects Versions: 1.2.0 >Reporter: Xiang Li >Assignee: Xiang Li >Priority: Minor > Fix For: 2.0.0 > > Attachments: HBASE-16183-master-v1.patch, HBASE-16183.patch > > > There are some errors in the example programs for coprocessor in Ref Guide. > Such as using deprecated APIs, generic... -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-16230) Calling 'get' in hbase shell with table name that doesn't exist causes it to hang for long time
Mikhail Antonov created HBASE-16230: --- Summary: Calling 'get' in hbase shell with table name that doesn't exist causes it to hang for long time Key: HBASE-16230 URL: https://issues.apache.org/jira/browse/HBASE-16230 Project: HBase Issue Type: Bug Components: Client, shell Affects Versions: 1.3.0 Reporter: Mikhail Antonov get 'table_that_doesnt_exist', 'x' hangs for a duration that looks more like the RPC timeout, then says: ERROR: HRegionInfo was null in -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16205) When Cells are not copied to MSLAB, deep clone it while adding to Memstore
[ https://issues.apache.org/jira/browse/HBASE-16205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15376622#comment-15376622 ] binlijin commented on HBASE-16205: -- Yes, I know it's for the master version. > When Cells are not copied to MSLAB, deep clone it while adding to Memstore > -- > > Key: HBASE-16205 > URL: https://issues.apache.org/jira/browse/HBASE-16205 > Project: HBase > Issue Type: Sub-task > Components: regionserver >Affects Versions: 2.0.0 >Reporter: Anoop Sam John >Assignee: Anoop Sam John >Priority: Critical > Fix For: 2.0.0 > > Attachments: HBASE-16205.patch > > > This is important after the HBASE-15180 optimization. After that, the cells flowing > in the write path will be backed by the same byte[] into which the RPC read the > request. By default we have MSLAB on, and so we have a copy operation > while adding Cells to the memstore. This copy might not happen if > 1. MSLAB is turned OFF > 2. The cell size is more than a configurable max size (defaults to 256 KB) > 3. The operation is Append/Increment. > In such cases, we should just clone the Cell into a new byte[] and then add it > to the memstore. Otherwise we keep referring to the bigger byte[] chunk for a longer > time. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16189) [Rolling Upgrade] 2.0 hfiles cannot be opened by 1.x servers
[ https://issues.apache.org/jira/browse/HBASE-16189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15376595#comment-15376595 ] Hadoop QA commented on HBASE-16189: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 4s {color} | {color:red} HBASE-16189 does not apply to master. Rebase required? Wrong Branch? See https://yetus.apache.org/documentation/0.2.1/precommit-patchnames for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12817910/HBASE-16189_branch-1.patch | | JIRA Issue | HBASE-16189 | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/2636/console | | Powered by | Apache Yetus 0.2.1 http://yetus.apache.org | This message was automatically generated. > [Rolling Upgrade] 2.0 hfiles cannot be opened by 1.x servers > > > Key: HBASE-16189 > URL: https://issues.apache.org/jira/browse/HBASE-16189 > Project: HBase > Issue Type: Sub-task > Components: migration >Reporter: Enis Soztutar >Assignee: ramkrishna.s.vasudevan >Priority: Critical > Fix For: 1.4.0 > > Attachments: HBASE-16189.patch, HBASE-16189_branch-1.patch > > > HBASE-10800 added MetaCellComparator, which gets written to the HFile. 1.x > code does not have the new class, hence fails to open the regions. I did not > check whether this is only for meta or for regular tables as well. 
> {code} > Caused by: org.apache.hadoop.hbase.io.hfile.CorruptHFileException: Problem > reading HFile Trailer from file > hdfs://cn017.l42scl.hortonworks.com:8020/apps/hbase/data/data/hbase/meta/1588230740/info/aa96e4ef463b4a82956330b236440437 > at > org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:483) > at org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:511) > at > org.apache.hadoop.hbase.regionserver.StoreFile$Reader.(StoreFile.java:1123) > at > org.apache.hadoop.hbase.regionserver.StoreFileInfo.open(StoreFileInfo.java:267) > at > org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:409) > at > org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:512) > at > org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:687) > at > org.apache.hadoop.hbase.regionserver.HStore.access$000(HStore.java:130) > at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:554) > at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:551) > ... 6 more > Caused by: java.io.IOException: java.lang.ClassNotFoundException: > org.apache.hadoop.hbase.CellComparator$MetaCellComparator > at > org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.getComparatorClass(FixedFileTrailer.java:581) > at > org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.deserializeFromPB(FixedFileTrailer.java:300) > at > org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.deserialize(FixedFileTrailer.java:242) > at > org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.readFromStream(FixedFileTrailer.java:407) > at > org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:468) > ... 
15 more > Caused by: java.lang.ClassNotFoundException: > org.apache.hadoop.hbase.CellComparator$MetaCellComparator > at java.net.URLClassLoader.findClass(URLClassLoader.java:381) > at java.lang.ClassLoader.loadClass(ClassLoader.java:424) > at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331) > at java.lang.ClassLoader.loadClass(ClassLoader.java:357) > at java.lang.Class.forName0(Native Method) > at java.lang.Class.forName(Class.java:264) > at > org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.getComparatorClass(FixedFileTrailer.java:579) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16189) [Rolling Upgrade] 2.0 hfiles cannot be opened by 1.x servers
[ https://issues.apache.org/jira/browse/HBASE-16189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ramkrishna.s.vasudevan updated HBASE-16189: --- Status: Patch Available (was: Open) > [Rolling Upgrade] 2.0 hfiles cannot be opened by 1.x servers > > > Key: HBASE-16189 > URL: https://issues.apache.org/jira/browse/HBASE-16189 > Project: HBase > Issue Type: Sub-task > Components: migration >Reporter: Enis Soztutar >Assignee: ramkrishna.s.vasudevan >Priority: Critical > Fix For: 1.4.0 > > Attachments: HBASE-16189.patch, HBASE-16189_branch-1.patch > > > HBASE-10800 added MetaCellComparator, which gets written to the HFile. 1.x > code does not have the new class, hence fails to open the regions. I did not > check whether this is only for meta or for regular tables as well. > {code} > Caused by: org.apache.hadoop.hbase.io.hfile.CorruptHFileException: Problem > reading HFile Trailer from file > hdfs://cn017.l42scl.hortonworks.com:8020/apps/hbase/data/data/hbase/meta/1588230740/info/aa96e4ef463b4a82956330b236440437 > at > org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:483) > at org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:511) > at > org.apache.hadoop.hbase.regionserver.StoreFile$Reader.(StoreFile.java:1123) > at > org.apache.hadoop.hbase.regionserver.StoreFileInfo.open(StoreFileInfo.java:267) > at > org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:409) > at > org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:512) > at > org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:687) > at > org.apache.hadoop.hbase.regionserver.HStore.access$000(HStore.java:130) > at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:554) > at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:551) > ... 
6 more > Caused by: java.io.IOException: java.lang.ClassNotFoundException: > org.apache.hadoop.hbase.CellComparator$MetaCellComparator > at > org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.getComparatorClass(FixedFileTrailer.java:581) > at > org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.deserializeFromPB(FixedFileTrailer.java:300) > at > org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.deserialize(FixedFileTrailer.java:242) > at > org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.readFromStream(FixedFileTrailer.java:407) > at > org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:468) > ... 15 more > Caused by: java.lang.ClassNotFoundException: > org.apache.hadoop.hbase.CellComparator$MetaCellComparator > at java.net.URLClassLoader.findClass(URLClassLoader.java:381) > at java.lang.ClassLoader.loadClass(ClassLoader.java:424) > at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331) > at java.lang.ClassLoader.loadClass(ClassLoader.java:357) > at java.lang.Class.forName0(Native Method) > at java.lang.Class.forName(Class.java:264) > at > org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.getComparatorClass(FixedFileTrailer.java:579) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
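The root failure above is a plain ClassNotFoundException: the HFile trailer names a comparator class that the running 1.x code does not ship. One way a branch-1 fix can tolerate such files is to translate unknown comparator class names into a locally available equivalent before loading the class. This is a minimal sketch, not the actual HBASE-16189 patch; the translation table and the 1.x target class name are assumptions:

```java
import java.util.Map;

// Sketch: tolerate comparator class names written by newer (2.0) HFiles by
// translating them to classes the local 1.x code actually has, instead of
// letting Class.forName(...) throw ClassNotFoundException while reading
// the FixedFileTrailer.
public class ComparatorNameMapper {
    // Hypothetical translation table; the key is the name a 2.0 writer emits,
    // the value is an assumed 1.x-era equivalent.
    static final Map<String, String> RENAMES = Map.of(
        "org.apache.hadoop.hbase.CellComparator$MetaCellComparator",
        "org.apache.hadoop.hbase.KeyValue$MetaComparator");

    // Resolve the class name stored in the trailer to one we can load locally.
    static String resolve(String trailerClassName) {
        return RENAMES.getOrDefault(trailerClassName, trailerClassName);
    }

    public static void main(String[] args) {
        System.out.println(resolve(
            "org.apache.hadoop.hbase.CellComparator$MetaCellComparator"));
        System.out.println(resolve("some.other.Comparator")); // unknown names pass through
    }
}
```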
[jira] [Updated] (HBASE-16189) [Rolling Upgrade] 2.0 hfiles cannot be opened by 1.x servers
[ https://issues.apache.org/jira/browse/HBASE-16189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ramkrishna.s.vasudevan updated HBASE-16189: --- Attachment: HBASE-16189_branch-1.patch Renaming the patch for QA. > [Rolling Upgrade] 2.0 hfiles cannot be opened by 1.x servers > > > Key: HBASE-16189 > URL: https://issues.apache.org/jira/browse/HBASE-16189 > Project: HBase > Issue Type: Sub-task > Components: migration >Reporter: Enis Soztutar >Assignee: ramkrishna.s.vasudevan >Priority: Critical > Fix For: 1.4.0 > > Attachments: HBASE-16189.patch, HBASE-16189_branch-1.patch > > > HBASE-10800 added MetaCellComparator, which gets written to the HFile. 1.x > code does not have the new class, hence fails to open the regions. I did not > check whether this is only for meta or for regular tables as well. > {code} > Caused by: org.apache.hadoop.hbase.io.hfile.CorruptHFileException: Problem > reading HFile Trailer from file > hdfs://cn017.l42scl.hortonworks.com:8020/apps/hbase/data/data/hbase/meta/1588230740/info/aa96e4ef463b4a82956330b236440437 > at > org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:483) > at org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:511) > at > org.apache.hadoop.hbase.regionserver.StoreFile$Reader.(StoreFile.java:1123) > at > org.apache.hadoop.hbase.regionserver.StoreFileInfo.open(StoreFileInfo.java:267) > at > org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:409) > at > org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:512) > at > org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:687) > at > org.apache.hadoop.hbase.regionserver.HStore.access$000(HStore.java:130) > at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:554) > at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:551) > ... 
6 more > Caused by: java.io.IOException: java.lang.ClassNotFoundException: > org.apache.hadoop.hbase.CellComparator$MetaCellComparator > at > org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.getComparatorClass(FixedFileTrailer.java:581) > at > org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.deserializeFromPB(FixedFileTrailer.java:300) > at > org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.deserialize(FixedFileTrailer.java:242) > at > org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.readFromStream(FixedFileTrailer.java:407) > at > org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:468) > ... 15 more > Caused by: java.lang.ClassNotFoundException: > org.apache.hadoop.hbase.CellComparator$MetaCellComparator > at java.net.URLClassLoader.findClass(URLClassLoader.java:381) > at java.lang.ClassLoader.loadClass(ClassLoader.java:424) > at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331) > at java.lang.ClassLoader.loadClass(ClassLoader.java:357) > at java.lang.Class.forName0(Native Method) > at java.lang.Class.forName(Class.java:264) > at > org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.getComparatorClass(FixedFileTrailer.java:579) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16189) [Rolling Upgrade] 2.0 hfiles cannot be opened by 1.x servers
[ https://issues.apache.org/jira/browse/HBASE-16189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ramkrishna.s.vasudevan updated HBASE-16189: --- Attachment: HBASE-16189.patch [~enis] Can you test this patch? I am not sure how to write a test case for this, but I will see if it is possible. > [Rolling Upgrade] 2.0 hfiles cannot be opened by 1.x servers > > > Key: HBASE-16189 > URL: https://issues.apache.org/jira/browse/HBASE-16189 > Project: HBase > Issue Type: Sub-task > Components: migration >Reporter: Enis Soztutar >Assignee: ramkrishna.s.vasudevan >Priority: Critical > Fix For: 1.4.0 > > Attachments: HBASE-16189.patch > > > HBASE-10800 added MetaCellComparator, which gets written to the HFile. 1.x > code does not have the new class, hence fails to open the regions. I did not > check whether this is only for meta or for regular tables as well. > {code} > Caused by: org.apache.hadoop.hbase.io.hfile.CorruptHFileException: Problem > reading HFile Trailer from file > hdfs://cn017.l42scl.hortonworks.com:8020/apps/hbase/data/data/hbase/meta/1588230740/info/aa96e4ef463b4a82956330b236440437 > at > org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:483) > at org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:511) > at > org.apache.hadoop.hbase.regionserver.StoreFile$Reader.(StoreFile.java:1123) > at > org.apache.hadoop.hbase.regionserver.StoreFileInfo.open(StoreFileInfo.java:267) > at > org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:409) > at > org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:512) > at > org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:687) > at > org.apache.hadoop.hbase.regionserver.HStore.access$000(HStore.java:130) > at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:554) > at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:551) > ... 
6 more > Caused by: java.io.IOException: java.lang.ClassNotFoundException: > org.apache.hadoop.hbase.CellComparator$MetaCellComparator > at > org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.getComparatorClass(FixedFileTrailer.java:581) > at > org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.deserializeFromPB(FixedFileTrailer.java:300) > at > org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.deserialize(FixedFileTrailer.java:242) > at > org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.readFromStream(FixedFileTrailer.java:407) > at > org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:468) > ... 15 more > Caused by: java.lang.ClassNotFoundException: > org.apache.hadoop.hbase.CellComparator$MetaCellComparator > at java.net.URLClassLoader.findClass(URLClassLoader.java:381) > at java.lang.ClassLoader.loadClass(ClassLoader.java:424) > at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331) > at java.lang.ClassLoader.loadClass(ClassLoader.java:357) > at java.lang.Class.forName0(Native Method) > at java.lang.Class.forName(Class.java:264) > at > org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.getComparatorClass(FixedFileTrailer.java:579) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16229) Cleaning up size and heapSize calculation
[ https://issues.apache.org/jira/browse/HBASE-16229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15376590#comment-15376590 ] Anoop Sam John commented on HBASE-16229: This also looks odd in MemstoreCompactor: {code} private void doCompaction() { ImmutableSegment result = SegmentFactory.instance() // create the scanner .createImmutableSegment( compactingMemStore.getConfiguration(), compactingMemStore.getComparator(), CompactingMemStore.DEEP_OVERHEAD_PER_PIPELINE_ITEM); // the compaction processing try { // Phase I: create the compacted MutableCellSetSegment compactSegments(result); {code} 'result' is created as an ImmutableSegment, yet we then add cells to it. Within this method we should create a new MutableSegment, add to that, and finally wrap the result in an ImmutableSegment. That would look cleaner. > Cleaning up size and heapSize calculation > - > > Key: HBASE-16229 > URL: https://issues.apache.org/jira/browse/HBASE-16229 > Project: HBase > Issue Type: Sub-task >Affects Versions: 2.0.0 >Reporter: Anoop Sam John >Assignee: Anoop Sam John > Fix For: 2.0.0 > > > It is a bit ugly now. For example: > AbstractMemStore > {code} > public final static long FIXED_OVERHEAD = ClassSize.align( > ClassSize.OBJECT + > (4 * ClassSize.REFERENCE) + > (2 * Bytes.SIZEOF_LONG)); > public final static long DEEP_OVERHEAD = ClassSize.align(FIXED_OVERHEAD + > (ClassSize.ATOMIC_LONG + ClassSize.TIMERANGE_TRACKER + > ClassSize.CELL_SKIPLIST_SET + ClassSize.CONCURRENT_SKIPLISTMAP)); > {code} > We include the heap overhead of the Segment here as well. It would be better if the > Segment tracked its own overhead and the Memstore implementation used the heap sizes > of all of its segments to calculate its size. > Also this > {code} > public long heapSize() { > return getActive().getSize(); > } > {code} > heapSize() should consider the sizes of all segments, not just the active one's. I am not able to > see an overriding method in CompactingMemstore. > This jira tries to solve some of these. 
> When we create a Segment, we seem to pass some initial heap size value to it. > Why? The segment object should know its own heap size internally; it should not > be dictated by someone else. > More to add when doing this cleanup -- This message was sent by Atlassian JIRA (v6.3.4#6332)
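The restructuring suggested in the comment above can be sketched with plain Java collections standing in for the HBase segment classes (MutableSegment, ImmutableSegment, and SegmentFactory are not used here; all names in this sketch are illustrative):

```java
import java.util.Collections;
import java.util.SortedSet;
import java.util.TreeSet;

// Simplified model of the suggested flow: collect compacted cells into a
// mutable structure first, then wrap the finished result in an immutable
// view, instead of mutating something that claims to be immutable.
public class CompactionSketch {
    // Stand-in for compactSegments(...): fills a *mutable* target.
    static void compactInto(SortedSet<String> target) {
        target.add("row1/cf:a");
        target.add("row2/cf:b");
    }

    // Stand-in for the cleaner doCompaction(): the immutable segment is
    // created only once, around the fully built mutable result.
    static SortedSet<String> doCompaction() {
        SortedSet<String> mutable = new TreeSet<>();        // "MutableSegment"
        compactInto(mutable);                               // Phase I: compact
        return Collections.unmodifiableSortedSet(mutable);  // "ImmutableSegment"
    }

    public static void main(String[] args) {
        SortedSet<String> result = doCompaction();
        System.out.println(result.size()); // 2 cells after compaction
        try {
            result.add("row3/cf:c"); // immutability is now actually enforced
        } catch (UnsupportedOperationException e) {
            System.out.println("immutable");
        }
    }
}
```

With this shape the "immutable" name is honest: nothing can add cells to the result after construction, which is the cleanliness point the comment makes.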
[jira] [Updated] (HBASE-16169) Make RegionSizeCalculator scalable
[ https://issues.apache.org/jira/browse/HBASE-16169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thiruvel Thirumoolan updated HBASE-16169: - Attachment: HBASE-16169.master.004.patch > Make RegionSizeCalculator scalable > -- > > Key: HBASE-16169 > URL: https://issues.apache.org/jira/browse/HBASE-16169 > Project: HBase > Issue Type: Sub-task > Components: mapreduce, scaling >Reporter: Thiruvel Thirumoolan >Assignee: Thiruvel Thirumoolan > Fix For: 2.0.0, 1.4.0 > > Attachments: HBASE-16169.master.000.patch, > HBASE-16169.master.001.patch, HBASE-16169.master.002.patch, > HBASE-16169.master.003.patch, HBASE-16169.master.004.patch > > > RegionSizeCalculator is needed for better split generation of MR jobs. This > requires RegionLoad which can be obtained via ClusterStatus, i.e. accessing > Master. We don't want master to be in this path. > The proposal is to add an API to the RegionServer that gets RegionLoad of all > regions hosted on it or those of a table if specified. RegionSizeCalculator > can use the latter. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
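The proposal above can be sketched as follows. `fetchRegionSizes` is a stand-in for the proposed per-RegionServer RPC, which does not exist in the current API; all names and the hard-coded sizes are hypothetical:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the proposed flow: instead of pulling full ClusterStatus from
// the master, ask each region server only for the loads of the regions it
// hosts (filtered by table), then aggregate the answers client-side.
public class RegionSizeSketch {
    // Hypothetical per-server call: region name -> storefile size in bytes.
    // In the proposal this would be an RPC served by the RegionServer itself.
    static Map<String, Long> fetchRegionSizes(String server, String table) {
        Map<String, Long> sizes = new HashMap<>();
        if (server.equals("rs1")) sizes.put(table + ",,1", 128L << 20);
        if (server.equals("rs2")) sizes.put(table + ",k,2", 256L << 20);
        return sizes;
    }

    // What a scalable RegionSizeCalculator would do: one RPC per region
    // server, no master involvement on this path.
    static Map<String, Long> calculate(List<String> servers, String table) {
        Map<String, Long> all = new HashMap<>();
        for (String server : servers) {
            all.putAll(fetchRegionSizes(server, table));
        }
        return all;
    }

    public static void main(String[] args) {
        Map<String, Long> sizes = calculate(List.of("rs1", "rs2"), "t1");
        System.out.println(sizes.size()); // 2 regions aggregated
    }
}
```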
[jira] [Commented] (HBASE-16205) When Cells are not copied to MSLAB, deep clone it while adding to Memstore
[ https://issues.apache.org/jira/browse/HBASE-16205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15376561#comment-15376561 ] Anoop Sam John commented on HBASE-16205: Case 2: both cells are individually less than 256 KB, so both will be copied to MSLAB. Case 3: cell1's size is > 256 KB, so it won't get copied to MSLAB. It is each cell's own size, not the sum of the cells' sizes, that is accounted. When a cell is not copied to MSLAB, we cannot leave it as is and add it to the Memstore, because the buffer the request was read into goes back to the pool and gets reused. That is why the deep copy is needed. The copy happens into a byte[] created on the fly. > When Cells are not copied to MSLAB, deep clone it while adding to Memstore > -- > > Key: HBASE-16205 > URL: https://issues.apache.org/jira/browse/HBASE-16205 > Project: HBase > Issue Type: Sub-task > Components: regionserver >Affects Versions: 2.0.0 >Reporter: Anoop Sam John >Assignee: Anoop Sam John >Priority: Critical > Fix For: 2.0.0 > > Attachments: HBASE-16205.patch > > > This is important after the HBASE-15180 optimization. After that, the cells > flowing in the write path will be backed by the same byte[] into which the RPC > read the request. By default we have MSLAB on, and so we have a copy operation > while adding Cells to the memstore. This copy might not happen if > 1. MSLAB is turned OFF > 2. the Cell size is more than a configurable max size (this defaults to 256 KB) > 3. the operation is Append/Increment. > In such cases, we should just clone the Cell into a new byte[] and then add it > to the memstore. Otherwise we keep referring to the bigger byte[] chunk for a > longer time. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
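A minimal, self-contained illustration of the point in the comment above: why a cell that skips MSLAB must be deep-cloned out of the pooled RPC buffer. No HBase classes are used; the threshold constant and method names are illustrative, mirroring the 256 KB default described in the issue:

```java
import java.util.Arrays;

// Why cells that skip MSLAB must be deep-cloned: the RPC read buffer is
// pooled, so any cell that merely references it is corrupted when the
// buffer is returned to the pool and reused for the next request.
public class DeepCloneSketch {
    static final int MAX_MSLAB_CELL = 256 * 1024; // configurable max, 256 KB default

    // True when the cell will NOT be copied by MSLAB and therefore needs
    // an explicit deep clone before going into the memstore. Note the
    // decision is per cell, not on the sum of the cells in the request.
    static boolean needsDeepClone(int cellLen, boolean mslabOn, boolean appendOrIncrement) {
        return !mslabOn || cellLen > MAX_MSLAB_CELL || appendOrIncrement;
    }

    public static void main(String[] args) {
        byte[] pooledRpcBuffer = "row/cf:q/value".getBytes();
        // Deep clone into a fresh byte[] created on the fly:
        byte[] cloned = Arrays.copyOf(pooledRpcBuffer, pooledRpcBuffer.length);
        Arrays.fill(pooledRpcBuffer, (byte) 0);   // pool reuses the buffer
        System.out.println(new String(cloned));   // the clone is unaffected
        System.out.println(needsDeepClone(300 * 1024, true, false)); // too big for MSLAB
    }
}
```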
[jira] [Comment Edited] (HBASE-16205) When Cells are not copied to MSLAB, deep clone it while adding to Memstore
[ https://issues.apache.org/jira/browse/HBASE-16205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15376525#comment-15376525 ] binlijin edited comment on HBASE-16205 at 7/14/16 8:03 AM: --- This is more important along with HBASE-15788, so for 2.0 we should deep clone it, because the ByteBuffer will be reused for the coming request. was (Author: aoxiang): This is more imp along with HBASE-15788, so for 2.0 we should deep clone it. > When Cells are not copied to MSLAB, deep clone it while adding to Memstore > -- > > Key: HBASE-16205 > URL: https://issues.apache.org/jira/browse/HBASE-16205 > Project: HBase > Issue Type: Sub-task > Components: regionserver >Affects Versions: 2.0.0 >Reporter: Anoop Sam John >Assignee: Anoop Sam John >Priority: Critical > Fix For: 2.0.0 > > Attachments: HBASE-16205.patch > > > This is important after the HBASE-15180 optimization. After that, the cells > flowing in the write path will be backed by the same byte[] into which the RPC > read the request. By default we have MSLAB on, and so we have a copy operation > while adding Cells to the memstore. This copy might not happen if > 1. MSLAB is turned OFF > 2. the Cell size is more than a configurable max size (this defaults to 256 KB) > 3. the operation is Append/Increment. > In such cases, we should just clone the Cell into a new byte[] and then add it > to the memstore. Otherwise we keep referring to the bigger byte[] chunk for a > longer time. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (HBASE-16205) When Cells are not copied to MSLAB, deep clone it while adding to Memstore
[ https://issues.apache.org/jira/browse/HBASE-16205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15376523#comment-15376523 ] binlijin edited comment on HBASE-16205 at 7/14/16 7:59 AM: --- [~anoop.hbase], sir, with MSLAB turned on, what I am suggesting is: Case one: a request with two cells, cell1=100k and cell2=100k, cell1+cell2<256K, so cell1 and cell2 are copied to MSLAB. Case two: a request with two cells, cell1=200k and cell2=100k, cell1+cell2>256K; cell1 and cell2 are not copied to MSLAB. Case three: a request with two cells, cell1=300k and cell2=1k, cell1>256K; cell1 and cell2 are not copied to MSLAB. Cases two and three are what I am suggesting. was (Author: aoxiang): [~anoop.hbase], sir With MSLAB turn on. What i am suggest is: One case: a request with two cell: cell1=100k, cell2=100k, cell1+cell2<256K so cell1 and cell2 copy to MSLAB. Two case: a request with two cell: cell1=200k, cell2=100k, cell1+cell2>256K, cell1 and cell2 do not copy to MSLAB. Three case: a request with two cell: cell1=300k, cell2=1k, cell2>256K, cell1 and cell2 do not copy to MSLAB. And the two and three case is what i am suggested. > When Cells are not copied to MSLAB, deep clone it while adding to Memstore > -- > > Key: HBASE-16205 > URL: https://issues.apache.org/jira/browse/HBASE-16205 > Project: HBase > Issue Type: Sub-task > Components: regionserver >Affects Versions: 2.0.0 >Reporter: Anoop Sam John >Assignee: Anoop Sam John >Priority: Critical > Fix For: 2.0.0 > > Attachments: HBASE-16205.patch > > > This is important after the HBASE-15180 optimization. After that, the cells > flowing in the write path will be backed by the same byte[] into which the RPC > read the request. By default we have MSLAB on, and so we have a copy operation > while adding Cells to the memstore. This copy might not happen if > 1. MSLAB is turned OFF > 2. the Cell size is more than a configurable max size (this defaults to 256 KB) > 3. the operation is Append/Increment. > In such cases, we should just clone the Cell into a new byte[] and then add it > to the memstore. Otherwise we keep referring to the bigger byte[] chunk for a > longer time. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16205) When Cells are not copied to MSLAB, deep clone it while adding to Memstore
[ https://issues.apache.org/jira/browse/HBASE-16205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15376525#comment-15376525 ] binlijin commented on HBASE-16205: -- This is more important along with HBASE-15788, so for 2.0 we should deep clone it. > When Cells are not copied to MSLAB, deep clone it while adding to Memstore > -- > > Key: HBASE-16205 > URL: https://issues.apache.org/jira/browse/HBASE-16205 > Project: HBase > Issue Type: Sub-task > Components: regionserver >Affects Versions: 2.0.0 >Reporter: Anoop Sam John >Assignee: Anoop Sam John >Priority: Critical > Fix For: 2.0.0 > > Attachments: HBASE-16205.patch > > > This is important after the HBASE-15180 optimization. After that, the cells > flowing in the write path will be backed by the same byte[] into which the RPC > read the request. By default we have MSLAB on, and so we have a copy operation > while adding Cells to the memstore. This copy might not happen if > 1. MSLAB is turned OFF > 2. the Cell size is more than a configurable max size (this defaults to 256 KB) > 3. the operation is Append/Increment. > In such cases, we should just clone the Cell into a new byte[] and then add it > to the memstore. Otherwise we keep referring to the bigger byte[] chunk for a > longer time. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16205) When Cells are not copied to MSLAB, deep clone it while adding to Memstore
[ https://issues.apache.org/jira/browse/HBASE-16205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15376523#comment-15376523 ] binlijin commented on HBASE-16205: -- [~anoop.hbase], sir, with MSLAB turned on, what I am suggesting is: Case one: a request with two cells, cell1=100k and cell2=100k, cell1+cell2<256K, so cell1 and cell2 are copied to MSLAB. Case two: a request with two cells, cell1=200k and cell2=100k, cell1+cell2>256K; cell1 and cell2 are not copied to MSLAB. Case three: a request with two cells, cell1=300k and cell2=1k, cell2>256K; cell1 and cell2 are not copied to MSLAB. Cases two and three are what I am suggesting. > When Cells are not copied to MSLAB, deep clone it while adding to Memstore > -- > > Key: HBASE-16205 > URL: https://issues.apache.org/jira/browse/HBASE-16205 > Project: HBase > Issue Type: Sub-task > Components: regionserver >Affects Versions: 2.0.0 >Reporter: Anoop Sam John >Assignee: Anoop Sam John >Priority: Critical > Fix For: 2.0.0 > > Attachments: HBASE-16205.patch > > > This is important after the HBASE-15180 optimization. After that, the cells > flowing in the write path will be backed by the same byte[] into which the RPC > read the request. By default we have MSLAB on, and so we have a copy operation > while adding Cells to the memstore. This copy might not happen if > 1. MSLAB is turned OFF > 2. the Cell size is more than a configurable max size (this defaults to 256 KB) > 3. the operation is Append/Increment. > In such cases, we should just clone the Cell into a new byte[] and then add it > to the memstore. Otherwise we keep referring to the bigger byte[] chunk for a > longer time. -- This message was sent by Atlassian JIRA (v6.3.4#6332)