[jira] [Commented] (HBASE-17020) keylen in midkey() is not computed correctly
[ https://issues.apache.org/jira/browse/HBASE-17020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15659237#comment-15659237 ]

Hudson commented on HBASE-17020:
--------------------------------

ABORTED: Integrated in Jenkins build HBase-0.98-matrix #414 (See [https://builds.apache.org/job/HBase-0.98-matrix/414/])
HBASE-17020 keylen in midkey() dont computed correctly (liyu: rev cf2cb620e6167c079add7b83efbcea1bed8dd7d3)
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlockIndex.java
* (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileWriterV2.java
* (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlockIndex.java

> keylen in midkey() is not computed correctly
> --------------------------------------------
>
>                 Key: HBASE-17020
>                 URL: https://issues.apache.org/jira/browse/HBASE-17020
>             Project: HBase
>          Issue Type: Bug
>          Components: HFile
>    Affects Versions: 2.0.0, 1.3.0, 1.4.0, 1.1.7, 0.98.23, 1.2.4
>            Reporter: Yu Sun
>            Assignee: Yu Sun
>             Fix For: 2.0.0, 1.4.0, 1.2.5, 0.98.24, 1.1.8
>
>         Attachments: HBASE-17020-branch-0.98.patch, HBASE-17020-v1.patch,
> HBASE-17020-v2.patch, HBASE-17020-v2.patch, HBASE-17020-v3-branch1.1.patch,
> HBASE-17020.branch-0.98.patch, HBASE-17020.branch-0.98.patch,
> HBASE-17020.branch-1.1.patch
>
>
> In CellBasedKeyBlockIndexReader.midkey():
> {code}
> ByteBuff b = midLeafBlock.getBufferWithoutHeader();
> int numDataBlocks = b.getIntAfterPosition(0);
> int keyRelOffset = b.getIntAfterPosition(Bytes.SIZEOF_INT * (midKeyEntry + 1));
> int keyLen = b.getIntAfterPosition(Bytes.SIZEOF_INT * (midKeyEntry + 2)) - keyRelOffset;
> {code}
> the local variable keyLen computed here is actually the total of
> SECONDARY_INDEX_ENTRY_OVERHEAD + firstKey.length, as accumulated in:
> {code}
> void add(byte[] firstKey, long blockOffset, int onDiskDataSize,
>     long curTotalNumSubEntries) {
>   // Record the offset for the secondary index
>   secondaryIndexOffsetMarks.add(curTotalNonRootEntrySize);
>   curTotalNonRootEntrySize += SECONDARY_INDEX_ENTRY_OVERHEAD + firstKey.length;
> {code}
> When the midkey is the last entry of a leaf-level index block, this may throw:
> {quote}
> 2016-10-01 12:27:55,186 ERROR [MemStoreFlusher.0] regionserver.MemStoreFlusher: Cache flusher failed for entry [flush region pora_6_item_feature,0061:,1473838922457.12617bc4ebbfd171018bf96ac9bdd2a7.]
> java.lang.ArrayIndexOutOfBoundsException
>         at org.apache.hadoop.hbase.util.ByteBufferUtils.copyFromBufferToArray(ByteBufferUtils.java:936)
>         at org.apache.hadoop.hbase.nio.SingleByteBuff.toBytes(SingleByteBuff.java:303)
>         at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$CellBasedKeyBlockIndexReader.midkey(HFileBlockIndex.java:419)
>         at org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.midkey(HFileReaderImpl.java:1519)
>         at org.apache.hadoop.hbase.regionserver.StoreFile$Reader.midkey(StoreFile.java:1520)
>         at org.apache.hadoop.hbase.regionserver.StoreFile.getFileSplitPoint(StoreFile.java:706)
>         at org.apache.hadoop.hbase.regionserver.DefaultStoreFileManager.getSplitPoint(DefaultStoreFileManager.java:126)
>         at org.apache.hadoop.hbase.regionserver.HStore.getSplitPoint(HStore.java:1983)
>         at org.apache.hadoop.hbase.regionserver.ConstantFamilySizeRegionSplitPolicy.getSplitPoint(ConstantFamilySizeRegionSplitPolicy.java:77)
>         at org.apache.hadoop.hbase.regionserver.HRegion.checkSplit(HRegion.java:7756)
>         at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:513)
>         at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:471)
>         at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$800(MemStoreFlusher.java:75)
>         at org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:259)
>         at java.lang.Thread.run(Thread.java:756)
> {quote}

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
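The arithmetic behind the bug can be shown in a few lines of plain Java. This is an illustrative sketch, not the HBase code: `offsetMarks`, `buggyKeyLen`, and `fixedKeyLen` are hypothetical names that mimic how the secondary-index offset marks accumulate, assuming SECONDARY_INDEX_ENTRY_OVERHEAD is a long offset plus an int size (12 bytes). Because each offset delta includes that overhead, the buggy read is 12 bytes too long; for the last entry of a leaf block the over-read runs past the buffer, matching the ArrayIndexOutOfBoundsException above.

```java
public class MidKeyLenDemo {
    // Assumed overhead: a long block offset plus an int on-disk size per entry.
    static final int SECONDARY_INDEX_ENTRY_OVERHEAD = 8 + 4;

    // Build the secondary-index offset marks the way the add() snippet above does.
    static int[] offsetMarks(byte[][] keys) {
        int[] marks = new int[keys.length + 1];
        int curTotalNonRootEntrySize = 0;
        for (int i = 0; i < keys.length; i++) {
            marks[i] = curTotalNonRootEntrySize;
            curTotalNonRootEntrySize += SECONDARY_INDEX_ENTRY_OVERHEAD + keys[i].length;
        }
        marks[keys.length] = curTotalNonRootEntrySize; // final mark == total size
        return marks;
    }

    // Buggy: the offset delta still contains the 12-byte per-entry overhead.
    static int buggyKeyLen(int[] marks, int i) {
        return marks[i + 1] - marks[i];
    }

    // Fixed: subtract the overhead to recover the real key length.
    static int fixedKeyLen(int[] marks, int i) {
        return marks[i + 1] - marks[i] - SECONDARY_INDEX_ENTRY_OVERHEAD;
    }

    public static void main(String[] args) {
        byte[][] keys = { "aaa".getBytes(), "bbbbb".getBytes(), "cc".getBytes() };
        int[] marks = offsetMarks(keys);
        for (int i = 0; i < keys.length; i++) {
            System.out.println(i + ": buggy=" + buggyKeyLen(marks, i)
                + " fixed=" + fixedKeyLen(marks, i) + " actual=" + keys[i].length);
        }
    }
}
```

Subtracting SECONDARY_INDEX_ENTRY_OVERHEAD from the offset delta, which is conceptually what the committed fix does, recovers the true key length for every entry, including the last one.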
[jira] [Commented] (HBASE-17062) RegionSplitter throws ClassCastException
[ https://issues.apache.org/jira/browse/HBASE-17062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15659234#comment-15659234 ]

Hudson commented on HBASE-17062:
--------------------------------

ABORTED: Integrated in Jenkins build HBase-1.4 #531 (See [https://builds.apache.org/job/HBase-1.4/531/])
HBASE-17062 RegionSplitter throws ClassCastException (Jeongdae Kim) (tedyu: rev 469462c850daa30b13780774c92d5c1180cd83ed)
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/util/RegionSplitter.java

> RegionSplitter throws ClassCastException
> ----------------------------------------
>
>                 Key: HBASE-17062
>                 URL: https://issues.apache.org/jira/browse/HBASE-17062
>             Project: HBase
>          Issue Type: Bug
>          Components: util
>            Reporter: Jeongdae Kim
>            Assignee: Jeongdae Kim
>            Priority: Minor
>             Fix For: 2.0.0, 1.4.0
>
>         Attachments: HBASE-17062.001.patch, HBASE-17062.002.patch,
> HBASE-17062.003.patch
>
>
> RegionSplitter throws an exception as below:
> Exception in thread "main" java.lang.ClassCastException: org.apache.hadoop.hbase.ServerName cannot be cast to java.lang.String
>         at java.lang.String.compareTo(String.java:108)
>         at java.util.TreeMap.getEntry(TreeMap.java:346)
>         at java.util.TreeMap.get(TreeMap.java:273)
>         at org.apache.hadoop.hbase.util.RegionSplitter$1.compare(RegionSplitter.java:504)
>         at org.apache.hadoop.hbase.util.RegionSplitter$1.compare(RegionSplitter.java:502)
>         at java.util.TimSort.countRunAndMakeAscending(TimSort.java:324)
>         at java.util.TimSort.sort(TimSort.java:189)
>         at java.util.TimSort.sort(TimSort.java:173)
>         at java.util.Arrays.sort(Arrays.java:659)
>         at java.util.Collections.sort(Collections.java:217)
>         at org.apache.hadoop.hbase.util.RegionSplitter.rollingSplit(RegionSplitter.java:502)
>         at org.apache.hadoop.hbase.util.RegionSplitter.main(RegionSplitter.java:367)

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
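The stack trace above is the classic symptom of a TreeMap whose natural-ordering comparison receives a key of the wrong type. Here is a minimal, self-contained reproduction of the same failure mode, with Integer standing in for ServerName; none of this is the actual RegionSplitter code:

```java
import java.util.Map;
import java.util.TreeMap;

public class RawTreeMapDemo {
    // Returns true when looking up a wrongly-typed key blows up inside compareTo.
    static boolean lookupThrowsCce(Map rawMap, Object wrongTypedKey) {
        try {
            rawMap.get(wrongTypedKey);
            return false;
        } catch (ClassCastException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        // Key type Integer stands in for ServerName in HBASE-17062.
        Map<Integer, String> byPort = new TreeMap<>();
        byPort.put(16205, "region-server-1");

        // Erasing the key type (raw Map) lets a String key reach the comparison
        // inside TreeMap.getEntry(), where compareTo casts and throws.
        System.out.println(lookupThrowsCce(byPort, "server1:16205")); // prints "true"
    }
}
```

Keeping the map fully generic (no raw types, a single key type) turns this runtime ClassCastException into a compile-time error.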
[jira] [Commented] (HBASE-16962) Add readPoint to preCompactScannerOpen() and preFlushScannerOpen() API
[ https://issues.apache.org/jira/browse/HBASE-16962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15659236#comment-15659236 ]

Hudson commented on HBASE-16962:
--------------------------------

ABORTED: Integrated in Jenkins build HBase-1.4 #531 (See [https://builds.apache.org/job/HBase-1.4/531/])
HBASE-16962: Add readPoint to preCompactScannerOpen() and (anoopsamjohn: rev 44ab659b933afed2df323b031181fbcb52c85b61)
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/BaseRegionObserver.java
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionCoprocessorHost.java
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionObserver.java
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/Compactor.java
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFlusher.java

> Add readPoint to preCompactScannerOpen() and preFlushScannerOpen() API
> ----------------------------------------------------------------------
>
>                 Key: HBASE-16962
>                 URL: https://issues.apache.org/jira/browse/HBASE-16962
>             Project: HBase
>          Issue Type: Bug
>            Reporter: Thiruvel Thirumoolan
>            Assignee: Thiruvel Thirumoolan
>             Fix For: 2.0.0, 1.4.0
>
>         Attachments: HBASE-16956.branch-1.001.patch,
> HBASE-16956.master.006.patch, HBASE-16962.master.001.patch,
> HBASE-16962.master.002.patch, HBASE-16962.master.003.patch,
> HBASE-16962.master.004.patch, HBASE-16962.rough.patch
>
>
> Similar to HBASE-15759, I would like to add readPoint to the preCompactScannerOpen() API.
> I have a CP where I create a StoreScanner() as part of the preCompactScannerOpen() API. I need the readpoint which was obtained in the Compactor.compact() method to be consistent.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
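The request above is an API-evolution pattern: add an overload that carries the read point while keeping existing coprocessors source-compatible. A toy sketch of that pattern follows; the interface and method shapes here are hypothetical and heavily simplified (the real RegionObserver hooks take ObserverContext, Store, scanners, and more):

```java
public class ObserverDemo {
    // Hypothetical, heavily simplified observer; real RegionObserver differs.
    interface Observer {
        // Original hook, without the read point.
        default String preCompactScannerOpen(String store) {
            return "default-scanner:" + store;
        }
        // New overload carrying the readPoint. Defaulting to the old hook keeps
        // existing coprocessors working unchanged.
        default String preCompactScannerOpen(String store, long readPoint) {
            return preCompactScannerOpen(store);
        }
    }

    // A legacy CP that only knows the old signature still works.
    static class LegacyCp implements Observer {
        @Override public String preCompactScannerOpen(String store) {
            return "legacy:" + store;
        }
    }

    // A CP that builds its scanner with the compaction's read point, as the
    // reporter wants for consistency with Compactor.compact().
    static class ReadPointCp implements Observer {
        @Override public String preCompactScannerOpen(String store, long readPoint) {
            return store + "@" + readPoint;
        }
    }

    public static void main(String[] args) {
        System.out.println(new LegacyCp().preCompactScannerOpen("cf", 42L));    // legacy:cf
        System.out.println(new ReadPointCp().preCompactScannerOpen("cf", 42L)); // cf@42
    }
}
```

The design choice is the same one the patch's touched files suggest: the hook gains a parameter, and a default/base implementation bridges old implementors to the new signature.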
[jira] [Commented] (HBASE-17077) Don't copy the replication queue which belong to the peer has been deleted
[ https://issues.apache.org/jira/browse/HBASE-17077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15659136#comment-15659136 ] Hadoop QA commented on HBASE-17077: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 59s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 41s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 21s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 24s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 41s {color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | 
{color:green} mvninstall {color} | {color:green} 1m 1s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 50s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 42s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 22s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 27m 8s {color} | {color:green} Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 41s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 40s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 55s {color} | {color:green} hbase-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 88m 56s {color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 25s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 132m 44s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | Timed out junit tests | org.apache.hadoop.hbase.master.normalizer.TestSimpleRegionNormalizerOnCluster | | | org.apache.hadoop.hbase.master.TestSplitLogManager | | | org.apache.hadoop.hbase.client.TestFromClientSideWithCoprocessor | | | org.apache.hadoop.hbase.TestHBaseOnOtherDfsCluster | | | org.apache.hadoop.hbase.mapred.TestMultiTableSnapshotInputFormat | | | org.apache.hadoop.hbase.replication.regionserver.TestReplicationSourceManagerZkImpl | | | org.apache.hadoop.hbase.TestPartialResultsFromClientSide | | | org.apache.hadoop.hbase.mapred.TestTableSnapshotInputFormat | \\ \\ || Subsystem || Report/Notes || | Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:7bda515 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12838643/HBASE-17077.patch | | JIRA Issue | HBASE-17077 | | Optional Tests | asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux 95c4a1af384a 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build
[jira] [Commented] (HBASE-17075) Old regionStates are not clearing for truncated tables
[ https://issues.apache.org/jira/browse/HBASE-17075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15658972#comment-15658972 ]

Y. SREENIVASULU REDDY commented on HBASE-17075:
-----------------------------------------------

I installed a cluster on branch-1.0 and executed the truncate-table scenario on that branch, and also checked the code in branch-1.1.

> Old regionStates are not clearing for truncated tables
> ------------------------------------------------------
>
>                 Key: HBASE-17075
>                 URL: https://issues.apache.org/jira/browse/HBASE-17075
>             Project: HBase
>          Issue Type: Bug
>          Components: Region Assignment
>    Affects Versions: 1.1.8
>            Reporter: Y. SREENIVASULU REDDY
>            Priority: Minor
>             Fix For: 1.1.8
>
>         Attachments: HBASE-17075-branch-1.1.patch, HBASE-17075.001.patch
>
>
> For truncated tables, the region states are not cleared from the master's in-memory state.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Updated] (HBASE-17077) Don't copy the replication queue which belongs to a peer that has been deleted
[ https://issues.apache.org/jira/browse/HBASE-17077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Guanghao Zhang updated HBASE-17077:
-----------------------------------
    Status: Patch Available  (was: Open)

> Don't copy the replication queue which belongs to a peer that has been deleted
> ------------------------------------------------------------------------------
>
>                 Key: HBASE-17077
>                 URL: https://issues.apache.org/jira/browse/HBASE-17077
>             Project: HBase
>          Issue Type: Improvement
>            Reporter: Guanghao Zhang
>            Assignee: Guanghao Zhang
>            Priority: Minor
>         Attachments: HBASE-17077.patch
>
>
> When a region server dies, the other live region servers transfer the dead RS's replication queues to their own queues. Currently a live RS first copies a WAL queue to its own znode, then creates a new replication source to replicate the WALs. But it copies the queue even when the queue belongs to a peer that has been deleted. The current steps are:
> 1. copy the queue to its own znode
> 2. find that the queue belongs to a peer that has been deleted
> 3. remove the queue and don't create a new replication source for it
> There is a small improvement: the live region server doesn't need to copy such a queue to its own znode at all. The new steps are:
> 1. find that the queue belongs to a peer that has been deleted
> 2. remove the queue directly instead of copying it

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
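The reordering described in the issue can be sketched in a few lines of plain Java. This is illustrative only; the real logic lives in ZooKeeper-based replication-queue code, and `claimQueue`, `existingPeers`, and `myQueues` are hypothetical names. The point is simply to consult the peer list before doing any copying:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class ClaimQueuesDemo {
    final Set<String> existingPeers;                                  // currently configured peers
    final Map<String, List<String>> myQueues = new LinkedHashMap<>(); // queueId -> WAL names

    ClaimQueuesDemo(Set<String> existingPeers) {
        this.existingPeers = existingPeers;
    }

    // New ordering per HBASE-17077: look up the peer first, and skip the copy
    // (and the replication source) entirely when the peer has been deleted.
    boolean claimQueue(String peerId, String queueId, List<String> wals) {
        if (!existingPeers.contains(peerId)) {
            return false;               // dead RS's queue is dropped, nothing copied
        }
        myQueues.put(queueId, wals);    // copy under our own znode...
        return true;                    // ...then a replication source would be started
    }
}
```

In the old ordering, the `myQueues.put(...)` step ran unconditionally and the peer check came afterwards, forcing an immediate delete of data that had just been copied.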
[jira] [Updated] (HBASE-17077) Don't copy the replication queue which belongs to a peer that has been deleted
[ https://issues.apache.org/jira/browse/HBASE-17077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guanghao Zhang updated HBASE-17077: --- Attachment: HBASE-17077.patch > Don't copy the replication queue which belong to the peer has been deleted > -- > > Key: HBASE-17077 > URL: https://issues.apache.org/jira/browse/HBASE-17077 > Project: HBase > Issue Type: Improvement >Reporter: Guanghao Zhang >Assignee: Guanghao Zhang >Priority: Minor > Attachments: HBASE-17077.patch > > > When a region server is dead, then other live region servers will transfer > the dead rs's replication queue to their own queue. Now the live rs first > copy the wals queue to its own znode, then create a new replication source to > replicate the wals. But if the queue belong to a peer has been deleted, it > copy the queue, too. The current steps is: > 1. copy the queue to its own znode > 2. found the queue belong to a peer has been deleted > 3. remove the queue and don't create a new replication source for it > There is a small improvement. The live region server doesn't need to copy the > queue to its own znode. The new steps is: > 1. found the queue belong to a peer has been deleted > 2. remove the queue directly instead of copy it -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-17077) Don't copy the replication queue which belongs to a peer that has been deleted
[ https://issues.apache.org/jira/browse/HBASE-17077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guanghao Zhang updated HBASE-17077: --- Description: When a region server is dead, then other live region servers will transfer the dead rs's replication queue to their own queue. Now the live rs first copy the wals queue to its own znode, then create a new replication source to replicate the wals. But if the queue belong to a peer has been deleted, it copy the queue, too. The current steps is: 1. copy the queue to its own znode 2. found the queue belong to a peer has been deleted 3. remove the queue and don't create a new replication source for it There is a small improvement. The live region server doesn't need to copy the queue to its own znode. The new steps is: 1. found the queue belong to a peer has been deleted 2. remove the queue directly instead of copy it was: When a region server is dead, then other live region servers will transfer the dead rs's replication queue to their own queue. Now the live rs first copy the wals queue to its own znode, then create a new replication source to replicate the wals. But if the queue belong to a peer have been deleted, it copy the queue, too. The current steps is: 1. copy the queue to its own znode 2. found the queue belong to a peer have been deleted 3. remove the queue and don't create a new replication source for it There is a small improvement. The live region server doesn't need to copy the queue to its own znode. The new steps is: 1. found the queue belong to a peer have been deleted 2. 
remove the queue directly instead of copy it > Don't copy the replication queue which belong to the peer has been deleted > -- > > Key: HBASE-17077 > URL: https://issues.apache.org/jira/browse/HBASE-17077 > Project: HBase > Issue Type: Improvement >Reporter: Guanghao Zhang >Assignee: Guanghao Zhang >Priority: Minor > > When a region server is dead, then other live region servers will transfer > the dead rs's replication queue to their own queue. Now the live rs first > copy the wals queue to its own znode, then create a new replication source to > replicate the wals. But if the queue belong to a peer has been deleted, it > copy the queue, too. The current steps is: > 1. copy the queue to its own znode > 2. found the queue belong to a peer has been deleted > 3. remove the queue and don't create a new replication source for it > There is a small improvement. The live region server doesn't need to copy the > queue to its own znode. The new steps is: > 1. found the queue belong to a peer has been deleted > 2. remove the queue directly instead of copy it -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-17077) Don't copy the replication queue which belong to the peer has been deleted
[ https://issues.apache.org/jira/browse/HBASE-17077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guanghao Zhang updated HBASE-17077: --- Summary: Don't copy the replication queue which belong to the peer has been deleted (was: Don't copy the replication queue which belong to the peer have been deleted) > Don't copy the replication queue which belong to the peer has been deleted > -- > > Key: HBASE-17077 > URL: https://issues.apache.org/jira/browse/HBASE-17077 > Project: HBase > Issue Type: Improvement >Reporter: Guanghao Zhang >Assignee: Guanghao Zhang >Priority: Minor > > When a region server is dead, then other live region servers will transfer > the dead rs's replication queue to their own queue. Now the live rs first > copy the wals queue to its own znode, then create a new replication source to > replicate the wals. But if the queue belong to a peer have been deleted, it > copy the queue, too. The current steps is: > 1. copy the queue to its own znode > 2. found the queue belong to a peer have been deleted > 3. remove the queue and don't create a new replication source for it > There is a small improvement. The live region server doesn't need to copy the > queue to its own znode. The new steps is: > 1. found the queue belong to a peer have been deleted > 2. remove the queue directly instead of copy it -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-17077) Don't copy the replication queue which belong to the peer have been deleted
Guanghao Zhang created HBASE-17077: -- Summary: Don't copy the replication queue which belong to the peer have been deleted Key: HBASE-17077 URL: https://issues.apache.org/jira/browse/HBASE-17077 Project: HBase Issue Type: Improvement Reporter: Guanghao Zhang Assignee: Guanghao Zhang Priority: Minor When a region server is dead, then other live region servers will transfer the dead rs's replication queue to their own queue. Now the live rs first copy the wals queue to its own znode, then create a new replication source to replicate the wals. But if the queue belong to a peer have been deleted, it copy the queue, too. The current steps is: 1. copy the queue to its own znode 2. found the queue belong to a peer have been deleted 3. remove the queue and don't create a new replication source for it There is a small improvement. The live region server doesn't need to copy the queue to its own znode. The new steps is: 1. found the queue belong to a peer have been deleted 2. remove the queue directly instead of copy it -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16853) Regions are assigned to Region Servers in /hbase/draining after HBase Master failover
[ https://issues.apache.org/jira/browse/HBASE-16853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell updated HBASE-16853: --- Fix Version/s: 0.98.24 > Regions are assigned to Region Servers in /hbase/draining after HBase Master > failover > - > > Key: HBASE-16853 > URL: https://issues.apache.org/jira/browse/HBASE-16853 > Project: HBase > Issue Type: Bug > Components: Balancer, Region Assignment >Affects Versions: 2.0.0, 1.3.0 >Reporter: David Pope >Assignee: David Pope > Fix For: 2.0.0, 1.3.0, 1.4.0, 0.98.24 > > Attachments: 16853.v2.txt, HBASE-16853.branch-1.3-v1.patch, > HBASE-16853.branch-1.3-v2.patch > > > h2. Problem > If there are Region Servers registered as "draining", they will continue to > have "draining" znodes after a HMaster failover; however, the balancer will > assign regions to them. > h2. How to reproduce (on hbase master): > # Add regionserver to /hbase/draining: {{bin/hbase-jruby > bin/draining_servers.rb add server1:16205}} > # Unload the regionserver: {{bin/hbase-jruby bin/region_mover.rb unload > server1:16205}} > # Kill the Active HMaster and failover to the Backup HMaster > # Run the balancer: {{hbase shell <<< "balancer"}} > # Notice regions get assigned on new Active Master to Region Servers in > /hbase/draining > h2. Root Cause > The Backup HMaster initializes the {{DrainingServerTracker}} before the > Region Servers are registered as "online" with the {{ServerManager}}. As a > result, the {{ServerManager.drainingServers}} isn't populated with existing > Region Servers in draining when we have an HMaster failover. 
> E.g., > # We have a region server in draining: {{server1,16205,1000}} > # The {{RegionServerTracker}} starts up and adds a ZK watcher on the Znode > for this RegionServer: {{/hbase/rs/server1,16205,1000}} > # The {{DrainingServerTracker}} starts and processes each Znode under > {{/hbase/draining}}, but the Region Server isn't registered as "online" so it > isn't added to the {{ServerManager.drainingServers}} list. > # The Region Server is added to the {{DrainingServerTracker.drainingServers}} > list. > # The Region Server's Znode watcher is triggered and the ZK watcher is > restarted. > # The Region Server is registered with {{ServerManager}} as "online". > *END STATE:* The Region Server has a Znode in {{/hbase/draining}}, but it is > registered as "online" and the Balancer will start assigning regions to it. > {code} > $ bin/hbase-jruby bin/draining_servers.rb list > [1] server1,16205,1000 > $ grep server1,16205,1000 logs/master-server1.log > 2016-10-14 16:02:47,713 DEBUG [server1:16001.activeMasterManager] > zookeeper.ZKUtil: master:16001-0x157c56adc810014, quorum=localhost:2181, > baseZNode=/hbase Set watcher on existing znode=/hbase/rs/server1,16205,1000 > [2] 2016-10-14 16:02:47,722 DEBUG [server1:16001.activeMasterManager] > zookeeper.RegionServerTracker: Added tracking of RS > /hbase/rs/server1,16205,1000 > 2016-10-14 16:02:47,730 DEBUG [server1:16001.activeMasterManager] > zookeeper.ZKUtil: master:16001-0x157c56adc810014, quorum=localhost:2181, > baseZNode=/hbase Set watcher on existing > znode=/hbase/draining/server1,16205,1000 > [3] 2016-10-14 16:02:47,731 WARN [server1:16001.activeMasterManager] > master.ServerManager: Server server1,16205,1000 is not currently online. > Ignoring request to add it to draining list. 
> [4] 2016-10-14 16:02:47,731 INFO [server1:16001.activeMasterManager] > zookeeper.DrainingServerTracker: Draining RS node created, adding to list > [server1,16205,1000] > 2016-10-14 16:02:47,971 DEBUG [main-EventThread] zookeeper.ZKUtil: > master:16001-0x157c56adc810014, quorum=localhost:2181, baseZNode=/hbase Set > watcher on existing > znode=/hbase/rs/dev6918.prn2.facebook.com,16205,1476486047114 > [5] 2016-10-14 16:02:47,976 DEBUG [main-EventThread] > zookeeper.RegionServerTracker: Added tracking of RS > /hbase/rs/server1,16205,1000 > [6] 2016-10-14 16:02:52,084 INFO > [RpcServer.FifoWFPBQ.default.handler=29,queue=2,port=16001] > master.ServerManager: Registering server=server1,16205,1000 > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
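The startup race traced above can be modeled with plain sets. The names here are hypothetical, not the actual ServerManager/DrainingServerTracker code: the buggy order processes the draining znodes before the server is online, so the draining mark is silently dropped; one possible remedy, sketched below, is to re-check the draining znodes whenever a server registers as online:

```java
import java.util.HashSet;
import java.util.Set;

public class DrainingDemo {
    final Set<String> onlineServers = new HashSet<>();
    final Set<String> drainingZnodes;                    // what /hbase/draining says
    final Set<String> drainingServers = new HashSet<>(); // what the balancer consults

    DrainingDemo(Set<String> drainingZnodes) {
        this.drainingZnodes = drainingZnodes;
    }

    // Buggy startup order: the tracker only marks servers that are already online,
    // logging "Server ... is not currently online. Ignoring request..." otherwise.
    void processDrainingZnodes() {
        for (String s : drainingZnodes) {
            if (onlineServers.contains(s)) {
                drainingServers.add(s);
            }
        }
    }

    // One possible fix: re-check the draining znodes when a server registers,
    // so a failover-time ordering inversion cannot lose the draining mark.
    void registerOnline(String server) {
        onlineServers.add(server);
        if (drainingZnodes.contains(server)) {
            drainingServers.add(server);
        }
    }
}
```

Run against the timeline in the log excerpt (draining znode processed at step [3]-[4], registration at step [6]), the re-check on registration leaves server1 in the draining list, so the balancer would skip it.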
[jira] [Updated] (HBASE-16816) HMaster.move() should throw exception if region to move is not online
[ https://issues.apache.org/jira/browse/HBASE-16816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Purtell updated HBASE-16816:
-----------------------------------
    Fix Version/s: 0.98.24

> HMaster.move() should throw exception if region to move is not online
> ----------------------------------------------------------------------
>
>                 Key: HBASE-16816
>                 URL: https://issues.apache.org/jira/browse/HBASE-16816
>             Project: HBase
>          Issue Type: Bug
>          Components: Admin
>    Affects Versions: 1.1.2
>            Reporter: Allan Yang
>            Assignee: Allan Yang
>            Priority: Minor
>             Fix For: 1.4.0, 0.98.24
>
>         Attachments: HBASE-16816-branch-1-v2.patch,
> HBASE-16816-branch-1-v3.patch, HBASE-16816-branch-1.patch
>
>
> The move region function in HMaster only checks whether the region to move exists:
> {code}
> if (regionState == null) {
>   throw new UnknownRegionException(Bytes.toStringBinary(encodedRegionName));
> }
> {code}
> It does not report anything if the region is split or in transition, i.e. not movable, so the caller has no way to know whether the move region operation failed.
> This is a problem for "region_mover.rb". It only gives up moving a region if an exception is thrown; otherwise, it waits until a timeout and retries. Without an exception, it has no way to know that the region is not movable.
> {code}
> begin
>   admin.move(Bytes.toBytes(r.getEncodedName()), Bytes.toBytes(newServer))
> rescue java.lang.reflect.UndeclaredThrowableException,
>     org.apache.hadoop.hbase.UnknownRegionException => e
>   $LOG.info("Exception moving " + r.getEncodedName() +
>     "; split/moved? Continuing: " + e)
>   return
> end
> # Wait till its up on new server before moving on
> maxWaitInSeconds = admin.getConfiguration.getInt("hbase.move.wait.max", 60)
> maxWait = Time.now + maxWaitInSeconds
> while Time.now < maxWait
>   same = isSameServer(admin, r, original)
>   break unless same
>   sleep 0.1
> end
> end
> {code}

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
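The proposed behavior change is easy to sketch: when the region exists but is not movable, raise instead of silently returning, so scripted callers like region_mover.rb can give up immediately. This is a simplified illustration with made-up types; the real HMaster.move() deals in RegionState and HBase's own exception classes:

```java
public class MoveCheckDemo {
    // Made-up stand-ins for RegionState; illustrative only.
    enum State { OPEN, SPLIT, IN_TRANSITION }

    static class NotMovableException extends Exception {
        NotMovableException(String msg) { super(msg); }
    }

    // Instead of silently doing nothing for a split/in-transition region,
    // throw so that callers can distinguish "failed" from "in progress".
    static void checkMovable(String encodedName, State state) throws NotMovableException {
        if (state == null) {
            throw new NotMovableException("Unknown region " + encodedName);
        }
        if (state != State.OPEN) {
            throw new NotMovableException(encodedName + " is " + state + ", not movable");
        }
    }

    public static void main(String[] args) {
        try {
            checkMovable("12617bc4", State.SPLIT);
        } catch (NotMovableException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```

With a check like this in place, the Ruby snippet's rescue clause fires on a non-movable region instead of spinning until hbase.move.wait.max expires.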
[jira] [Commented] (HBASE-14882) Provide a Put API that adds the provided family, qualifier, value without copying
[ https://issues.apache.org/jira/browse/HBASE-14882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15658804#comment-15658804 ]

Xiang Li commented on HBASE-14882:
----------------------------------

Hi Anoop, I see, got your idea. Thanks! I will update the patch to
(1) implement ExtendedCell
(2) add some comments to explain why IndividualBytesFieldCell, at the client end, implements ExtendedCell
Please correct me if I did not get your idea.

> Provide a Put API that adds the provided family, qualifier, value without copying
> ---------------------------------------------------------------------------------
>
>                 Key: HBASE-14882
>                 URL: https://issues.apache.org/jira/browse/HBASE-14882
>             Project: HBase
>          Issue Type: Improvement
>    Affects Versions: 1.2.0
>            Reporter: Jerry He
>            Assignee: Xiang Li
>             Fix For: 2.0.0
>
>         Attachments: HBASE-14882.master.000.patch,
> HBASE-14882.master.001.patch, HBASE-14882.master.002.patch,
> HBASE-14882.master.003.patch
>
>
> In the Put API, we have addImmutable():
> {code}
> /**
>  * See {@link #addColumn(byte[], byte[], byte[])}. This version expects
>  * that the underlying arrays won't change. It's intended
>  * for usage internal HBase to and for advanced client applications.
>  */
> public Put addImmutable(byte [] family, byte [] qualifier, byte [] value)
> {code}
> But in the implementation, the family, qualifier and value are still being copied locally to create a kv.
> We should provide an API that truly uses the immutable family, qualifier and value.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
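The difference between the copying addColumn() path and a truly immutable-backed cell can be shown with a toy cell class. The names below are illustrative only, not the actual Put/IndividualBytesFieldCell API. Sharing the caller's array avoids one allocation and copy per field, at the price that the caller must honor the "underlying arrays won't change" contract:

```java
import java.util.Arrays;

public class NoCopyDemo {
    static class Cell {
        final byte[] value;

        // copy=true mimics addColumn (defensive copy); copy=false mimics an
        // addImmutable that truly shares the caller's array.
        Cell(byte[] value, boolean copy) {
            this.value = copy ? Arrays.copyOf(value, value.length) : value;
        }
    }

    public static void main(String[] args) {
        byte[] v = "v1".getBytes();
        Cell copied = new Cell(v, true);   // addColumn-style: defensive copy
        Cell shared = new Cell(v, false);  // addImmutable-style: zero-copy

        v[1] = '2';                        // caller breaks the immutability promise

        System.out.println(new String(copied.value)); // prints "v1"
        System.out.println(new String(shared.value)); // prints "v2"
    }
}
```

The divergent outputs are exactly why the javadoc's "arrays won't change" caveat matters: the zero-copy cell sees every later mutation of the caller's buffer.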
[jira] [Commented] (HBASE-14069) Add the ability for RegionSplitter to rolling split without using a SplitAlgorithm
[ https://issues.apache.org/jira/browse/HBASE-14069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15658766#comment-15658766 ]

Lijun Tang commented on HBASE-14069:
------------------------------------

Hi all, since the previous patch was not accepted, I will continue working on this.

> Add the ability for RegionSplitter to rolling split without using a SplitAlgorithm
> ----------------------------------------------------------------------------------
>
>                 Key: HBASE-14069
>                 URL: https://issues.apache.org/jira/browse/HBASE-14069
>             Project: HBase
>          Issue Type: New Feature
>          Components: hbase, util
>            Reporter: Elliott Clark
>            Assignee: Lijun Tang
>         Attachments: 0001-Improve-RegionSplitter-v1.patch,
> 0001-Improve-RegionSplitter.patch
>
>
> RegionSplitter is the utility that can rolling split regions. It would be nice to be able to split regions and have the normal split points computed for me so that I'm not reliant on knowing the data distribution.
> Tested manually in standalone mode for various test cases.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Updated] (HBASE-15531) Favored Nodes Enhancements
[ https://issues.apache.org/jira/browse/HBASE-15531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thiruvel Thirumoolan updated HBASE-15531: - Fix Version/s: 2.0.0 > Favored Nodes Enhancements > -- > > Key: HBASE-15531 > URL: https://issues.apache.org/jira/browse/HBASE-15531 > Project: HBase > Issue Type: Umbrella >Reporter: Francis Liu >Assignee: Francis Liu > Fix For: 2.0.0 > > > We been working on enhancing favored nodes at Yahoo! See draft document. > Feel free to comment. I'll add more info. > https://docs.google.com/document/d/1948RKX_-kGUNOHjiYFiKPZnmKWybJkahJYsNibVhcCk/edit?usp=sharing > These enhancements have recently started running in production. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-14069) Add the ability for RegionSplitter to rolling split without using a SplitAlgorithm
[ https://issues.apache.org/jira/browse/HBASE-14069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15658755#comment-15658755 ] Hadoop QA commented on HBASE-14069: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s {color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 9s {color} | {color:red} HBASE-14069 does not apply to master. Rebase required? Wrong Branch? See https://yetus.apache.org/documentation/0.3.0/precommit-patchnames for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12747108/0001-Improve-RegionSplitter-v1.patch | | JIRA Issue | HBASE-14069 | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/4440/console | | Powered by | Apache Yetus 0.3.0 http://yetus.apache.org | This message was automatically generated. > Add the ability for RegionSplitter to rolling split without using a > SplitAlgorithm > -- > > Key: HBASE-14069 > URL: https://issues.apache.org/jira/browse/HBASE-14069 > Project: HBase > Issue Type: New Feature > Components: hbase, util >Reporter: Elliott Clark >Assignee: Lijun Tang > Attachments: 0001-Improve-RegionSplitter-v1.patch, > 0001-Improve-RegionSplitter.patch > > > RegionSplittler is the utility that can rolling split regions. It would be > nice to be able to split regions and have the normal split points get > computed for me so that I'm not reliant on knowing data distribution. > Tested manually on standalone mode for various test cases. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (HBASE-14069) Add the ability for RegionSplitter to rolling split without using a SplitAlgorithm
[ https://issues.apache.org/jira/browse/HBASE-14069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lijun Tang reassigned HBASE-14069: -- Assignee: Lijun Tang (was: Abhilash) > Add the ability for RegionSplitter to rolling split without using a > SplitAlgorithm > -- > > Key: HBASE-14069 > URL: https://issues.apache.org/jira/browse/HBASE-14069 > Project: HBase > Issue Type: New Feature > Components: hbase, util >Reporter: Elliott Clark >Assignee: Lijun Tang > Attachments: 0001-Improve-RegionSplitter-v1.patch, > 0001-Improve-RegionSplitter.patch > > > RegionSplitter is the utility that can rolling split regions. It would be > nice to be able to split regions and have the normal split points get > computed for me so that I'm not reliant on knowing data distribution. > Tested manually in standalone mode for various test cases. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
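The request in this thread, computing "normal split points" without knowing the data distribution, amounts to dividing the row-key space evenly. The following is a minimal, hypothetical sketch of that idea, not the actual RegionSplitter/UniformSplit code; the class name and the fixed 8-byte key width are assumptions made for illustration only:

```java
import java.math.BigInteger;

// Hypothetical sketch: divide an 8-byte row-key space evenly into N regions.
// NOT the real org.apache.hadoop.hbase.util.RegionSplitter implementation.
public class UniformSplitSketch {
    // Returns numRegions - 1 split keys, evenly spaced in [0, 2^64).
    static byte[][] split(int numRegions) {
        BigInteger range = BigInteger.ONE.shiftLeft(64);   // size of the key space
        byte[][] splits = new byte[numRegions - 1][];
        for (int i = 1; i < numRegions; i++) {
            BigInteger key = range.multiply(BigInteger.valueOf(i))
                                  .divide(BigInteger.valueOf(numRegions));
            byte[] raw = key.toByteArray();                // may carry a leading sign byte
            byte[] fixed = new byte[8];                    // left-pad/truncate to 8 bytes
            int copy = Math.min(8, raw.length);
            System.arraycopy(raw, raw.length - copy, fixed, 8 - copy, copy);
            splits[i - 1] = fixed;
        }
        return splits;
    }

    public static void main(String[] args) {
        byte[][] s = split(4);   // 3 split points at 1/4, 2/4, 3/4 of the space
        System.out.println(s.length + " " + Integer.toHexString(s[0][0] & 0xff)); // prints: 3 40
    }
}
```

With split points derived purely from the key space like this, the caller no longer needs to know how the data is distributed, which is the convenience the ticket asks for.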
[jira] [Commented] (HBASE-17075) Old regionStates are not clearing for truncated tables
[ https://issues.apache.org/jira/browse/HBASE-17075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15658653#comment-15658653 ] Stephen Yuan Jiang commented on HBASE-17075: The handler code was removed in 1.2 and newer releases. The reason we kept the handler code in 1.1 is that 1.1 is the first release in which we switched to the Procedure V2 code base; I added a config (HBASE-13469) to allow users to use the old handler code. Since this config is not on by default, I don't think [~sreenivasulureddy] uses it. That said, I don't think it is worth the effort to fix this issue in the handler code. > Old regionStates are not clearing for truncated tables > --- > > Key: HBASE-17075 > URL: https://issues.apache.org/jira/browse/HBASE-17075 > Project: HBase > Issue Type: Bug > Components: Region Assignment >Affects Versions: 1.1.8 >Reporter: Y. SREENIVASULU REDDY >Priority: Minor > Fix For: 1.1.8 > > Attachments: HBASE-17075-branch-1.1.patch, HBASE-17075.001.patch > > > For truncated tables, not clearing the region states from master in-memory. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HBASE-10640) Add compilation step in QA run against hadoop 2.2.0
[ https://issues.apache.org/jira/browse/HBASE-10640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu resolved HBASE-10640. Resolution: Won't Fix Stale issue. > Add compilation step in QA run against hadoop 2.2.0 > --- > > Key: HBASE-10640 > URL: https://issues.apache.org/jira/browse/HBASE-10640 > Project: HBase > Issue Type: Sub-task >Reporter: Ted Yu > > As Andrew pointed out: > https://issues.apache.org/jira/browse/HBASE-10601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=13911869#comment-13911869 > bq. Yes, now that trunk has moved to 2.3.0 by default but 0.96 and 0.98 still > use 2.2.0. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-17075) Old regionStates are not clearing for truncated tables
[ https://issues.apache.org/jira/browse/HBASE-17075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15658543#comment-15658543 ] Ted Yu commented on HBASE-17075: Since TruncateTableHandler is not used, should we take it out (of the repo) ? > Old regionStates are not clearing for truncated tables > --- > > Key: HBASE-17075 > URL: https://issues.apache.org/jira/browse/HBASE-17075 > Project: HBase > Issue Type: Bug > Components: Region Assignment >Affects Versions: 1.1.8 >Reporter: Y. SREENIVASULU REDDY >Priority: Minor > Fix For: 1.1.8 > > Attachments: HBASE-17075-branch-1.1.patch, HBASE-17075.001.patch > > > For truncated tables, not clearing the region states from master in-memory. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-17075) Old regionStates are not clearing for truncated tables
[ https://issues.apache.org/jira/browse/HBASE-17075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15658539#comment-15658539 ] Hadoop QA commented on HBASE-17075: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 37s {color} | {color:green} branch-1.1 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s {color} | {color:green} branch-1.1 passed with JDK v1.8.0_111 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s {color} | {color:green} branch-1.1 passed with JDK v1.7.0_80 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 21s {color} | {color:green} branch-1.1 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 16s {color} | {color:green} branch-1.1 passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 46s {color} | {color:red} hbase-server in branch-1.1 has 80 extant Findbugs warnings. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 24s {color} | {color:red} hbase-server in branch-1.1 failed with JDK v1.8.0_111. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s {color} | {color:green} branch-1.1 passed with JDK v1.7.0_80 {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 39s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 28s {color} | {color:green} the patch passed with JDK v1.8.0_111 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 28s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s {color} | {color:green} the patch passed with JDK v1.7.0_80 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 33s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 20s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 16s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 12m 4s {color} | {color:green} The patch does not cause any errors with Hadoop 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} | | {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 15s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 0s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 23s {color} | {color:red} hbase-server in the patch failed with JDK v1.8.0_111. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 31s {color} | {color:green} the patch passed with JDK v1.7.0_80 {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 80m 33s {color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s {color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 105m 12s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | Timed out junit tests | org.apache.hadoop.hbase.replication.regionserver.TestRegionReplicaReplicationEndpoint | | | org.apache.hadoop.hbase.replication.TestMultiSlaveReplication | \\ \\ || Subsystem || Report/Notes || | Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:35e2245 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12838618/HBASE-17075-branch-1.1.patch | | JIRA Issue | HBASE-17075 | | Optional Tests | asflicense javac javadoc unit findbugs hadoopcheck
[jira] [Commented] (HBASE-17075) Old regionStates are not clearing for truncated tables
[ https://issues.apache.org/jira/browse/HBASE-17075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15658462#comment-15658462 ] Stephen Yuan Jiang commented on HBASE-17075: TruncateTableHandler is unused in the 1.1.x release (it was replaced by TruncateTableProcedure in HBASE-13455). [~sreenivasulureddy], the code you changed would not be executed in the 1.1.x code base. In TruncateTableProcedure, the state is cleaned up correctly: {code} case TRUNCATE_TABLE_REMOVE_FROM_META: hTableDescriptor = env.getMasterServices().getTableDescriptors().get(tableName); DeleteTableProcedure.deleteFromMeta(env, getTableName(), regions); DeleteTableProcedure.deleteAssignmentState(env, getTableName()); {code} [~sreenivasulureddy], did you see a problem on your cluster, or did you just read the code? > Old regionStates are not clearing for truncated tables > --- > > Key: HBASE-17075 > URL: https://issues.apache.org/jira/browse/HBASE-17075 > Project: HBase > Issue Type: Bug > Components: Region Assignment >Affects Versions: 1.1.8 >Reporter: Y. SREENIVASULU REDDY >Priority: Minor > Fix For: 1.1.8 > > Attachments: HBASE-17075-branch-1.1.patch, HBASE-17075.001.patch > > > For truncated tables, not clearing the region states from master in-memory. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
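To see why the procedure path should not leave stale region states, the TRUNCATE_TABLE_REMOVE_FROM_META transition quoted in the comment above can be modeled as one step of a small state machine, where the meta-removal step also drops the in-memory assignment state. This is a simplified toy model, not HBase code; the class, map, and state names are invented for illustration:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model (NOT actual HBase code) of a procedure-style truncate: the
// REMOVE_FROM_META step deletes both the meta rows and the in-memory
// region states in one transition, so nothing stale can survive.
public class TruncateSketch {
    enum State { REMOVE_FROM_META, CLEAR_FS_LAYOUT, CREATE_TABLE, DONE }

    final Map<String, String> regionStates = new HashMap<>(); // "table,startKey" -> state

    void truncate(String table) {
        State state = State.REMOVE_FROM_META;
        while (state != State.DONE) {
            switch (state) {
                case REMOVE_FROM_META:
                    // analogous to DeleteTableProcedure.deleteFromMeta(...) plus
                    // deleteAssignmentState(...): drop every region of the table
                    regionStates.keySet().removeIf(r -> r.startsWith(table + ","));
                    state = State.CLEAR_FS_LAYOUT;
                    break;
                case CLEAR_FS_LAYOUT:
                    state = State.CREATE_TABLE;  // filesystem cleanup elided
                    break;
                case CREATE_TABLE:
                    state = State.DONE;          // recreating the empty table elided
                    break;
            }
        }
    }
}
```

The point of the model: because state cleanup is a mandatory transition of the state machine rather than a side effect of a handler, a truncate that reaches REMOVE_FROM_META cannot skip it.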
[jira] [Commented] (HBASE-17075) Old regionStates are not clearing for truncated tables
[ https://issues.apache.org/jira/browse/HBASE-17075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15658418#comment-15658418 ] Ted Yu commented on HBASE-17075: {code} 76 } finally { 77 AssignmentManager am = this.masterServices.getAssignmentManager(); {code} The cleanup is done in case of error, too. Do we need to make a distinction between error and normal execution? > Old regionStates are not clearing for truncated tables > --- > > Key: HBASE-17075 > URL: https://issues.apache.org/jira/browse/HBASE-17075 > Project: HBase > Issue Type: Bug > Components: Region Assignment >Affects Versions: 1.1.8 >Reporter: Y. SREENIVASULU REDDY >Priority: Minor > Fix For: 1.1.8 > > Attachments: HBASE-17075-branch-1.1.patch, HBASE-17075.001.patch > > > For truncated tables, not clearing the region states from master in-memory. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
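The review question above is about a general Java pattern: a finally block runs on both the success and the error path, so distinguishing the two usually takes an explicit success flag. A minimal sketch in plain Java (the names and the string log are illustrative only, not taken from the patch under review):

```java
// Generic sketch of cleanup-in-finally with an explicit success flag,
// so the cleanup can behave differently on error vs normal completion.
public class CleanupSketch {
    static String log = "";

    static void doWork(boolean fail) {
        boolean succeeded = false;
        try {
            if (fail) throw new RuntimeException("step failed");
            succeeded = true;
        } catch (RuntimeException e) {
            log += "error;";
        } finally {
            if (succeeded) {
                log += "full-cleanup;";    // e.g. clear region states from master memory
            } else {
                log += "partial-cleanup;"; // leave state in place for retry or diagnosis
            }
        }
    }
}
```

Without the flag, the finally block cannot tell the two outcomes apart, which is exactly the ambiguity the comment points at.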
[jira] [Commented] (HBASE-17076) implement getAndPut() and getAndDelete()
[ https://issues.apache.org/jira/browse/HBASE-17076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15658411#comment-15658411 ] Hadoop QA commented on HBASE-17076: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 2 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 3s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 4s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 10m 13s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 3s {color} | {color:green} master passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 2s {color} | {color:red} hbase-protocol-shaded in master has 24 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 41s {color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 48s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 15s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 2m 15s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 15s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 10m 37s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 3s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s {color} | {color:red} The patch has 21 line(s) that end in whitespace. Use git apply --whitespace=fix. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 27m 43s {color} | {color:green} Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. {color} | | {color:red}-1{color} | {color:red} hbaseprotoc {color} | {color:red} 0m 25s {color} | {color:red} Patch generated 1 new protoc errors in hbase-server. 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 56s {color} | {color:red} hbase-client generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 51s {color} | {color:red} hbase-server generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 15s {color} | {color:red} hbase-client generated 2 new + 13 unchanged - 0 fixed = 15 total (was 13) {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 25s {color} | {color:red} hbase-server generated 2 new + 1 unchanged - 0 fixed = 3 total (was 1) {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 16s {color} | {color:green} hbase-protocol in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 27s {color} | {color:green} hbase-protocol-shaded in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 12s {color} | {color:red} hbase-client in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 15s {color} | {color:green} hbase-hadoop-compat in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 18s {color} | {color:green} hbase-hadoop2-compat in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 22s {color} |
[jira] [Updated] (HBASE-17075) Old regionStates are not clearing for truncated tables
[ https://issues.apache.org/jira/browse/HBASE-17075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Y. SREENIVASULU REDDY updated HBASE-17075: -- Attachment: HBASE-17075-branch-1.1.patch [~tedyu] Addressed your comments, pls check. > Old regionStates are not clearing for truncated tables > --- > > Key: HBASE-17075 > URL: https://issues.apache.org/jira/browse/HBASE-17075 > Project: HBase > Issue Type: Bug > Components: Region Assignment >Affects Versions: 1.1.8 >Reporter: Y. SREENIVASULU REDDY >Priority: Minor > Fix For: 1.1.8 > > Attachments: HBASE-17075-branch-1.1.patch, HBASE-17075.001.patch > > > For truncated tables, not clearing the region states from master in-memory. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-14123) HBase Backup/Restore Phase 2
[ https://issues.apache.org/jira/browse/HBASE-14123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15658163#comment-15658163 ] Hadoop QA commented on HBASE-14123: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s {color} | {color:blue} Docker mode activated. {color} | | {color:blue}0{color} | {color:blue} patch {color} | {color:blue} 0m 4s {color} | {color:blue} The patch file was not named according to hbase's naming conventions. Please see https://yetus.apache.org/documentation/0.3.0/precommit-patchnames for instructions. {color} | | {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 7s {color} | {color:blue} Shelldocs was not available. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 47 new or modified test files. 
{color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 0s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 43s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 20s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 2m 20s {color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s {color} | {color:blue} Skipped patched modules with no Java source: . {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 53s {color} | {color:red} hbase-protocol-shaded in master has 24 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 14s {color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 19s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 45s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 4m 45s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 45s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 21s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | 
{color:green} 2m 19s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 4s {color} | {color:green} There were no new shellcheck issues. {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 1s {color} | {color:red} The patch has 6 line(s) that end in whitespace. Use git apply --whitespace=fix. {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 1s {color} | {color:red} The patch 1 line(s) with tabs. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s {color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 27m 34s {color} | {color:green} Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. {color} | | {color:red}-1{color} | {color:red} hbaseprotoc {color} | {color:red} 0m 25s {color} | {color:red} Patch generated 1 new protoc errors in hbase-server. {color} | | {color:red}-1{color} | {color:red} hbaseprotoc {color} | {color:red} 2m 6s {color} | {color:red} Patch generated 1 new protoc errors in .. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s {color} | {color:blue} Skipped patched modules with no Java source: . {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 53s {color} | {color:red} hbase-server generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:red}-1{color} | {color:red} javadoc {color}
[jira] [Updated] (HBASE-17076) implement getAndPut() and getAndDelete()
[ https://issues.apache.org/jira/browse/HBASE-17076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ChiaPing Tsai updated HBASE-17076: -- Attachment: HBASE-17076-v0.patch > implement getAndPut() and getAndDelete() > > > Key: HBASE-17076 > URL: https://issues.apache.org/jira/browse/HBASE-17076 > Project: HBase > Issue Type: New Feature >Affects Versions: 2.0.0 >Reporter: ChiaPing Tsai >Priority: Minor > Fix For: 2.0.0 > > Attachments: HBASE-17076-v0.patch > > > We implement getAndPut() and getAndDelete() via a coprocessor, but there is > a lot of duplicated effort (e.g., data checks, row locking, returned values, > and WAL handling). It would be great if we provided this compare-and-swap > primitive. > The draft patch is attached. Any advice and suggestions will be greatly > appreciated. > Thanks. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-17076) implement getAndPut() and getAndDelete()
[ https://issues.apache.org/jira/browse/HBASE-17076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ChiaPing Tsai updated HBASE-17076: -- Status: Patch Available (was: Open) > implement getAndPut() and getAndDelete() > > > Key: HBASE-17076 > URL: https://issues.apache.org/jira/browse/HBASE-17076 > Project: HBase > Issue Type: New Feature >Affects Versions: 2.0.0 >Reporter: ChiaPing Tsai >Priority: Minor > Fix For: 2.0.0 > > Attachments: HBASE-17076-v0.patch > > > We implement getAndPut() and getAndDelete() via a coprocessor, but there is > a lot of duplicated effort (e.g., data checks, row locking, returned values, > and WAL handling). It would be great if we provided this compare-and-swap > primitive. > The draft patch is attached. Any advice and suggestions will be greatly > appreciated. > Thanks. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-17076) implement getAndPut() and getAndDelete()
[ https://issues.apache.org/jira/browse/HBASE-17076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ChiaPing Tsai updated HBASE-17076: -- Description: We implement getAndPut() and getAndDelete() via a coprocessor, but there is a lot of duplicated effort (e.g., data checks, row locking, returned values, and WAL handling). It would be great if we provided this compare-and-swap primitive. The draft patch is attached. Any advice and suggestions will be greatly appreciated. Thanks. was:We implement the getAndPut() and getAndDelete() by coprocessor, but there are a lot of duplicate effort (e.g., data checks, row lock, returned value, and wal). It is cool if we provide the compare-and-swap primitive. > implement getAndPut() and getAndDelete() > > > Key: HBASE-17076 > URL: https://issues.apache.org/jira/browse/HBASE-17076 > Project: HBase > Issue Type: New Feature >Affects Versions: 2.0.0 >Reporter: ChiaPing Tsai >Priority: Minor > Fix For: 2.0.0 > > > We implement getAndPut() and getAndDelete() via a coprocessor, but there is > a lot of duplicated effort (e.g., data checks, row locking, returned values, > and WAL handling). It would be great if we provided this compare-and-swap > primitive. > The draft patch is attached. Any advice and suggestions will be greatly > appreciated. > Thanks. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-17076) implement getAndPut() and getAndDelete()
ChiaPing Tsai created HBASE-17076: - Summary: implement getAndPut() and getAndDelete() Key: HBASE-17076 URL: https://issues.apache.org/jira/browse/HBASE-17076 Project: HBase Issue Type: New Feature Affects Versions: 2.0.0 Reporter: ChiaPing Tsai Priority: Minor Fix For: 2.0.0 We implement getAndPut() and getAndDelete() via a coprocessor, but there is a lot of duplicated effort (e.g., data checks, row locking, returned values, and WAL handling). It would be great if we provided this compare-and-swap primitive. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
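The primitive proposed in this ticket can be illustrated with a toy in-memory store: getAndPut() returns the previous value and installs the new one in a single atomic step, which is what a separate client-side get followed by a put cannot guarantee without a row lock. This sketch is not the HBase API; the class and method names are invented for illustration:

```java
import java.util.concurrent.ConcurrentHashMap;

// Toy illustration (NOT the HBase client API) of a getAndPut primitive:
// read the old value and write the new one as one atomic operation,
// instead of a get + put pair that other writers could interleave with.
public class GetAndPutSketch {
    private final ConcurrentHashMap<String, String> store = new ConcurrentHashMap<>();

    // One atomic step; server-side this is where the single row lock,
    // data checks, and WAL write would happen exactly once.
    String getAndPut(String row, String value) {
        return store.put(row, value);  // ConcurrentHashMap.put returns the previous value atomically
    }

    String get(String row) {
        return store.get(row);
    }
}
```

A server-side version of this shape is what removes the duplicated effort the description lists: the checks, locking, and WAL handling happen once inside the primitive rather than in every coprocessor.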
[jira] [Commented] (HBASE-16962) Add readPoint to preCompactScannerOpen() and preFlushScannerOpen() API
[ https://issues.apache.org/jira/browse/HBASE-16962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15658098#comment-15658098 ] Thiruvel Thirumoolan commented on HBASE-16962: -- Thanks for all the reviews! > Add readPoint to preCompactScannerOpen() and preFlushScannerOpen() API > -- > > Key: HBASE-16962 > URL: https://issues.apache.org/jira/browse/HBASE-16962 > Project: HBase > Issue Type: Bug >Reporter: Thiruvel Thirumoolan >Assignee: Thiruvel Thirumoolan > Fix For: 2.0.0, 1.4.0 > > Attachments: HBASE-16956.branch-1.001.patch, > HBASE-16956.master.006.patch, HBASE-16962.master.001.patch, > HBASE-16962.master.002.patch, HBASE-16962.master.003.patch, > HBASE-16962.master.004.patch, HBASE-16962.rough.patch > > > Similar to HBASE-15759, I would like to add readPoint to the > preCompactScannerOpen() API. > I have a CP where I create a StoreScanner() as part of the > preCompactScannerOpen() API. I need the readpoint which was obtained in > Compactor.compact() method to be consistent. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-17069) RegionServer writes invalid META entries for split daughters in some circumstances
[ https://issues.apache.org/jira/browse/HBASE-17069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15658076#comment-15658076 ] Andrew Purtell commented on HBASE-17069: I have a bisect in progress. It's taken me a while to get a good signal. I am confident about these boundaries: Bad: a12d0a861db850ded1a66d6be8e3a4a9d2c76a4f Good (when patched for HBASE-15315 and HBASE-16093): 1a305bb4848ebcda2bd7c0df8f2f9c03ddf5b471 > RegionServer writes invalid META entries for split daughters in some > circumstances > -- > > Key: HBASE-17069 > URL: https://issues.apache.org/jira/browse/HBASE-17069 > Project: HBase > Issue Type: Bug >Affects Versions: 1.2.4 >Reporter: Andrew Purtell >Priority: Critical > Attachments: daughter_1_d55ef81c2f8299abbddfce0445067830.log, > daughter_2_08629d59564726da2497f70451aafcdb.log, logs.tar.gz, > parent-393d2bfd8b1c52ce08540306659624f2.log > > > I have been seeing frequent ITBLL failures testing various versions of 1.2.x. > Over the lifetime of 1.2.x the following issues have been fixed: > - HBASE-15315 (Remove always set super user call as high priority) > - HBASE-16093 (Fix splits failed before creating daughter regions leave meta > inconsistent) > And this one is pending: > - HBASE-17044 (Fix merge failed before creating merged region leaves meta > inconsistent) > I can apply all of the above to branch-1.2 and still see this failure: > *The life of stillborn region d55ef81c2f8299abbddfce0445067830* > *Master sees SPLITTING_NEW* > {noformat} > 2016-11-08 04:23:21,186 INFO [AM.ZK.Worker-pool2-t82] master.RegionStates: > Transition null to {d55ef81c2f8299abbddfce0445067830 state=SPLITTING_NEW, > ts=1478579001186, server=node-3.cluster,16020,1478578389506} > {noformat} > *The RegionServer creates it* > {noformat} > 2016-11-08 04:23:26,035 INFO > [StoreOpener-d55ef81c2f8299abbddfce0445067830-1] hfile.CacheConfig: Created > cacheConfig for GomnU: blockCache=LruBlockCache{blockCount=34, > currentSize=14996112, 
freeSize=12823716208, maxSize=12838712320, > heapSize=14996112, minSize=12196776960, minFactor=0.95, multiSize=6098388480, > multiFactor=0.5, singleSize=3049194240, singleFactor=0.25}, > cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, > cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, > prefetchOnOpen=false > 2016-11-08 04:23:26,038 INFO > [StoreOpener-d55ef81c2f8299abbddfce0445067830-1] hfile.CacheConfig: Created > cacheConfig for big: blockCache=LruBlockCache{blockCount=34, > currentSize=14996112, freeSize=12823716208, maxSize=12838712320, > heapSize=14996112, minSize=12196776960, minFactor=0.95, multiSize=6098388480, > multiFactor=0.5, singleSize=3049194240, singleFactor=0.25}, > cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, > cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, > prefetchOnOpen=false > 2016-11-08 04:23:26,442 INFO > [StoreOpener-d55ef81c2f8299abbddfce0445067830-1] hfile.CacheConfig: Created > cacheConfig for meta: blockCache=LruBlockCache{blockCount=63, > currentSize=17187656, freeSize=12821524664, maxSize=12838712320, > heapSize=17187656, minSize=12196776960, minFactor=0.95, multiSize=6098388480, > multiFactor=0.5, singleSize=3049194240, singleFactor=0.25}, > cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, > cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, > prefetchOnOpen=false > 2016-11-08 04:23:26,713 INFO > [StoreOpener-d55ef81c2f8299abbddfce0445067830-1] hfile.CacheConfig: Created > cacheConfig for nwmrW: blockCache=LruBlockCache{blockCount=96, > currentSize=19178440, freeSize=12819533880, maxSize=12838712320, > heapSize=19178440, minSize=12196776960, minFactor=0.95, multiSize=6098388480, > multiFactor=0.5, singleSize=3049194240, singleFactor=0.25}, > cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, > cacheBloomsOnWrite=false, cacheEvictOnClose=false, 
cacheDataCompressed=false, > prefetchOnOpen=false > 2016-11-08 04:23:26,715 INFO > [StoreOpener-d55ef81c2f8299abbddfce0445067830-1] hfile.CacheConfig: Created > cacheConfig for piwbr: blockCache=LruBlockCache{blockCount=96, > currentSize=19178440, freeSize=12819533880, maxSize=12838712320, > heapSize=19178440, minSize=12196776960, minFactor=0.95, multiSize=6098388480, > multiFactor=0.5, singleSize=3049194240, singleFactor=0.25}, > cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, > cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, > prefetchOnOpen=false > 2016-11-08 04:23:26,717 INFO > [StoreOpener-d55ef81c2f8299abbddfce0445067830-1] hfile.CacheConfig: Created > cacheConfig for
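The bisect described in this thread is a binary search over the commit range between the known-good and known-bad revisions. A minimal sketch of the search itself, with the oracle standing in for an expensive check such as a 1B-row ITBLL run (all names here are invented; this is not git's implementation):

```java
// Sketch of what a git bisect does over an ordered commit history:
// binary search for the first bad commit, assuming every commit before
// it is good and every commit from it onward is bad.
public class BisectSketch {
    interface Oracle { boolean isBad(int commit); }

    // commits[0] is known good and commits[length-1] is known bad;
    // returns the index of the first bad commit.
    static int bisect(int[] commits, Oracle oracle) {
        int lo = 0, hi = commits.length - 1;  // invariant: lo is good, hi is bad
        while (hi - lo > 1) {
            int mid = (lo + hi) / 2;
            if (oracle.isBad(commits[mid])) hi = mid;  // first bad commit is at or before mid
            else lo = mid;                             // first bad commit is after mid
        }
        return hi;
    }
}
```

Because each oracle call here corresponds to a full cluster test run, narrowing a range of k commits still costs about log2(k) expensive runs, which is why getting "a good signal" per step matters so much.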
[jira] [Comment Edited] (HBASE-17069) RegionServer writes invalid META entries for split daughters in some circumstances
[ https://issues.apache.org/jira/browse/HBASE-17069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15658076#comment-15658076 ] Andrew Purtell edited comment on HBASE-17069 at 11/11/16 8:25 PM: -- I have a bisect in progress. It's taken me a while to get a good signal. These boundaries have been determined by failure and success of multiple 1B ITBLL jobs, respectively: Bad: a12d0a861db850ded1a66d6be8e3a4a9d2c76a4f Good (when patched for HBASE-15315 and HBASE-16093): 1a305bb4848ebcda2bd7c0df8f2f9c03ddf5b471 There are a few steps within this range. Working on it was (Author: apurtell): I have a bisect in progress. It's taken me a while to get a good signal. These boundaries have been determined by failure and success of multiple 1B ITBLL jobs, respectively: Bad: a12d0a861db850ded1a66d6be8e3a4a9d2c76a4f Good (when patched for HBASE-15315 and HBASE-16093): 1a305bb4848ebcda2bd7c0df8f2f9c03ddf5b471 > RegionServer writes invalid META entries for split daughters in some > circumstances > -- > > Key: HBASE-17069 > URL: https://issues.apache.org/jira/browse/HBASE-17069 > Project: HBase > Issue Type: Bug >Affects Versions: 1.2.4 >Reporter: Andrew Purtell >Priority: Critical > Attachments: daughter_1_d55ef81c2f8299abbddfce0445067830.log, > daughter_2_08629d59564726da2497f70451aafcdb.log, logs.tar.gz, > parent-393d2bfd8b1c52ce08540306659624f2.log > > > I have been seeing frequent ITBLL failures testing various versions of 1.2.x. 
> Over the lifetime of 1.2.x the following issues have been fixed: > - HBASE-15315 (Remove always set super user call as high priority) > - HBASE-16093 (Fix splits failed before creating daughter regions leave meta > inconsistent) > And this one is pending: > - HBASE-17044 (Fix merge failed before creating merged region leaves meta > inconsistent) > I can apply all of the above to branch-1.2 and still see this failure: > *The life of stillborn region d55ef81c2f8299abbddfce0445067830* > *Master sees SPLITTING_NEW* > {noformat} > 2016-11-08 04:23:21,186 INFO [AM.ZK.Worker-pool2-t82] master.RegionStates: > Transition null to {d55ef81c2f8299abbddfce0445067830 state=SPLITTING_NEW, > ts=1478579001186, server=node-3.cluster,16020,1478578389506} > {noformat} > *The RegionServer creates it* > {noformat} > 2016-11-08 04:23:26,035 INFO > [StoreOpener-d55ef81c2f8299abbddfce0445067830-1] hfile.CacheConfig: Created > cacheConfig for GomnU: blockCache=LruBlockCache{blockCount=34, > currentSize=14996112, freeSize=12823716208, maxSize=12838712320, > heapSize=14996112, minSize=12196776960, minFactor=0.95, multiSize=6098388480, > multiFactor=0.5, singleSize=3049194240, singleFactor=0.25}, > cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, > cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, > prefetchOnOpen=false > 2016-11-08 04:23:26,038 INFO > [StoreOpener-d55ef81c2f8299abbddfce0445067830-1] hfile.CacheConfig: Created > cacheConfig for big: blockCache=LruBlockCache{blockCount=34, > currentSize=14996112, freeSize=12823716208, maxSize=12838712320, > heapSize=14996112, minSize=12196776960, minFactor=0.95, multiSize=6098388480, > multiFactor=0.5, singleSize=3049194240, singleFactor=0.25}, > cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, > cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, > prefetchOnOpen=false > 2016-11-08 04:23:26,442 INFO > 
[StoreOpener-d55ef81c2f8299abbddfce0445067830-1] hfile.CacheConfig: Created > cacheConfig for meta: blockCache=LruBlockCache{blockCount=63, > currentSize=17187656, freeSize=12821524664, maxSize=12838712320, > heapSize=17187656, minSize=12196776960, minFactor=0.95, multiSize=6098388480, > multiFactor=0.5, singleSize=3049194240, singleFactor=0.25}, > cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, > cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, > prefetchOnOpen=false > 2016-11-08 04:23:26,713 INFO > [StoreOpener-d55ef81c2f8299abbddfce0445067830-1] hfile.CacheConfig: Created > cacheConfig for nwmrW: blockCache=LruBlockCache{blockCount=96, > currentSize=19178440, freeSize=12819533880, maxSize=12838712320, > heapSize=19178440, minSize=12196776960, minFactor=0.95, multiSize=6098388480, > multiFactor=0.5, singleSize=3049194240, singleFactor=0.25}, > cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, > cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, > prefetchOnOpen=false > 2016-11-08 04:23:26,715 INFO > [StoreOpener-d55ef81c2f8299abbddfce0445067830-1] hfile.CacheConfig: Created > cacheConfig for piwbr: blockCache=LruBlockCache{blockCount=96, >
[jira] [Comment Edited] (HBASE-17069) RegionServer writes invalid META entries for split daughters in some circumstances
[ https://issues.apache.org/jira/browse/HBASE-17069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15658076#comment-15658076 ] Andrew Purtell edited comment on HBASE-17069 at 11/11/16 8:25 PM: -- I have a bisect in progress. It's taken me a while to get a good signal. These boundaries have been determined by failure and success of multiple 1B ITBLL jobs, respectively: Bad: a12d0a861db850ded1a66d6be8e3a4a9d2c76a4f Good (when patched for HBASE-15315 and HBASE-16093): 1a305bb4848ebcda2bd7c0df8f2f9c03ddf5b471 was (Author: apurtell): I have a bisect in progress. It's taken me a while to get a good signal. I am confident about these boundaries: Bad: a12d0a861db850ded1a66d6be8e3a4a9d2c76a4f Good (when patched for HBASE-15315 and HBASE-16093): 1a305bb4848ebcda2bd7c0df8f2f9c03ddf5b471 > RegionServer writes invalid META entries for split daughters in some > circumstances > -- > > Key: HBASE-17069 > URL: https://issues.apache.org/jira/browse/HBASE-17069 > Project: HBase > Issue Type: Bug >Affects Versions: 1.2.4 >Reporter: Andrew Purtell >Priority: Critical > Attachments: daughter_1_d55ef81c2f8299abbddfce0445067830.log, > daughter_2_08629d59564726da2497f70451aafcdb.log, logs.tar.gz, > parent-393d2bfd8b1c52ce08540306659624f2.log > > > I have been seeing frequent ITBLL failures testing various versions of 1.2.x. 
> Over the lifetime of 1.2.x the following issues have been fixed: > - HBASE-15315 (Remove always set super user call as high priority) > - HBASE-16093 (Fix splits failed before creating daughter regions leave meta > inconsistent) > And this one is pending: > - HBASE-17044 (Fix merge failed before creating merged region leaves meta > inconsistent) > I can apply all of the above to branch-1.2 and still see this failure: > *The life of stillborn region d55ef81c2f8299abbddfce0445067830* > *Master sees SPLITTING_NEW* > {noformat} > 2016-11-08 04:23:21,186 INFO [AM.ZK.Worker-pool2-t82] master.RegionStates: > Transition null to {d55ef81c2f8299abbddfce0445067830 state=SPLITTING_NEW, > ts=1478579001186, server=node-3.cluster,16020,1478578389506} > {noformat} > *The RegionServer creates it* > {noformat} > 2016-11-08 04:23:26,035 INFO > [StoreOpener-d55ef81c2f8299abbddfce0445067830-1] hfile.CacheConfig: Created > cacheConfig for GomnU: blockCache=LruBlockCache{blockCount=34, > currentSize=14996112, freeSize=12823716208, maxSize=12838712320, > heapSize=14996112, minSize=12196776960, minFactor=0.95, multiSize=6098388480, > multiFactor=0.5, singleSize=3049194240, singleFactor=0.25}, > cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, > cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, > prefetchOnOpen=false > 2016-11-08 04:23:26,038 INFO > [StoreOpener-d55ef81c2f8299abbddfce0445067830-1] hfile.CacheConfig: Created > cacheConfig for big: blockCache=LruBlockCache{blockCount=34, > currentSize=14996112, freeSize=12823716208, maxSize=12838712320, > heapSize=14996112, minSize=12196776960, minFactor=0.95, multiSize=6098388480, > multiFactor=0.5, singleSize=3049194240, singleFactor=0.25}, > cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, > cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, > prefetchOnOpen=false > 2016-11-08 04:23:26,442 INFO > 
[StoreOpener-d55ef81c2f8299abbddfce0445067830-1] hfile.CacheConfig: Created > cacheConfig for meta: blockCache=LruBlockCache{blockCount=63, > currentSize=17187656, freeSize=12821524664, maxSize=12838712320, > heapSize=17187656, minSize=12196776960, minFactor=0.95, multiSize=6098388480, > multiFactor=0.5, singleSize=3049194240, singleFactor=0.25}, > cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, > cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, > prefetchOnOpen=false > 2016-11-08 04:23:26,713 INFO > [StoreOpener-d55ef81c2f8299abbddfce0445067830-1] hfile.CacheConfig: Created > cacheConfig for nwmrW: blockCache=LruBlockCache{blockCount=96, > currentSize=19178440, freeSize=12819533880, maxSize=12838712320, > heapSize=19178440, minSize=12196776960, minFactor=0.95, multiSize=6098388480, > multiFactor=0.5, singleSize=3049194240, singleFactor=0.25}, > cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, > cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, > prefetchOnOpen=false > 2016-11-08 04:23:26,715 INFO > [StoreOpener-d55ef81c2f8299abbddfce0445067830-1] hfile.CacheConfig: Created > cacheConfig for piwbr: blockCache=LruBlockCache{blockCount=96, > currentSize=19178440, freeSize=12819533880, maxSize=12838712320, > heapSize=19178440, minSize=12196776960, minFactor=0.95,
[jira] [Updated] (HBASE-14123) HBase Backup/Restore Phase 2
[ https://issues.apache.org/jira/browse/HBASE-14123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stephen Yuan Jiang updated HBASE-14123: --- Fix Version/s: 2.0.0 > HBase Backup/Restore Phase 2 > > > Key: HBASE-14123 > URL: https://issues.apache.org/jira/browse/HBASE-14123 > Project: HBase > Issue Type: Umbrella >Reporter: Vladimir Rodionov >Assignee: Vladimir Rodionov >Priority: Blocker > Fix For: 2.0.0 > > Attachments: 14123-master.v14.txt, 14123-master.v15.txt, > 14123-master.v16.txt, 14123-master.v17.txt, 14123-master.v18.txt, > 14123-master.v19.txt, 14123-master.v2.txt, 14123-master.v20.txt, > 14123-master.v21.txt, 14123-master.v24.txt, 14123-master.v25.txt, > 14123-master.v27.txt, 14123-master.v28.txt, 14123-master.v29.full.txt, > 14123-master.v3.txt, 14123-master.v30.txt, 14123-master.v31.txt, > 14123-master.v32.txt, 14123-master.v33.txt, 14123-master.v34.txt, > 14123-master.v35.txt, 14123-master.v36.txt, 14123-master.v37.txt, > 14123-master.v5.txt, 14123-master.v6.txt, 14123-master.v7.txt, > 14123-master.v8.txt, 14123-master.v9.txt, 14123-v14.txt, > HBASE-14123-for-7912-v1.patch, HBASE-14123-for-7912-v6.patch, > HBASE-14123-v1.patch, HBASE-14123-v10.patch, HBASE-14123-v11.patch, > HBASE-14123-v12.patch, HBASE-14123-v13.patch, HBASE-14123-v15.patch, > HBASE-14123-v16.patch, HBASE-14123-v2.patch, HBASE-14123-v3.patch, > HBASE-14123-v4.patch, HBASE-14123-v5.patch, HBASE-14123-v6.patch, > HBASE-14123-v7.patch, HBASE-14123-v9.patch > > > Phase 2 umbrella JIRA. See HBASE-7912 for design document and description. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-14123) HBase Backup/Restore Phase 2
[ https://issues.apache.org/jira/browse/HBASE-14123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stephen Yuan Jiang updated HBASE-14123: --- Priority: Blocker (was: Major) > HBase Backup/Restore Phase 2 > > > Key: HBASE-14123 > URL: https://issues.apache.org/jira/browse/HBASE-14123 > Project: HBase > Issue Type: Umbrella >Reporter: Vladimir Rodionov >Assignee: Vladimir Rodionov >Priority: Blocker > Fix For: 2.0.0 > > Attachments: 14123-master.v14.txt, 14123-master.v15.txt, > 14123-master.v16.txt, 14123-master.v17.txt, 14123-master.v18.txt, > 14123-master.v19.txt, 14123-master.v2.txt, 14123-master.v20.txt, > 14123-master.v21.txt, 14123-master.v24.txt, 14123-master.v25.txt, > 14123-master.v27.txt, 14123-master.v28.txt, 14123-master.v29.full.txt, > 14123-master.v3.txt, 14123-master.v30.txt, 14123-master.v31.txt, > 14123-master.v32.txt, 14123-master.v33.txt, 14123-master.v34.txt, > 14123-master.v35.txt, 14123-master.v36.txt, 14123-master.v37.txt, > 14123-master.v5.txt, 14123-master.v6.txt, 14123-master.v7.txt, > 14123-master.v8.txt, 14123-master.v9.txt, 14123-v14.txt, > HBASE-14123-for-7912-v1.patch, HBASE-14123-for-7912-v6.patch, > HBASE-14123-v1.patch, HBASE-14123-v10.patch, HBASE-14123-v11.patch, > HBASE-14123-v12.patch, HBASE-14123-v13.patch, HBASE-14123-v15.patch, > HBASE-14123-v16.patch, HBASE-14123-v2.patch, HBASE-14123-v3.patch, > HBASE-14123-v4.patch, HBASE-14123-v5.patch, HBASE-14123-v6.patch, > HBASE-14123-v7.patch, HBASE-14123-v9.patch > > > Phase 2 umbrella JIRA. See HBASE-7912 for design document and description. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-14350) Procedure V2 Phase 2: Assignment Manager
[ https://issues.apache.org/jira/browse/HBASE-14350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stephen Yuan Jiang updated HBASE-14350: --- Priority: Blocker (was: Major) > Procedure V2 Phase 2: Assignment Manager > > > Key: HBASE-14350 > URL: https://issues.apache.org/jira/browse/HBASE-14350 > Project: HBase > Issue Type: Task > Components: master, proc-v2 >Affects Versions: 2.0.0 >Reporter: Stephen Yuan Jiang >Assignee: Stephen Yuan Jiang >Priority: Blocker > Fix For: 2.0.0 > > > This is the second phase of Procedure V2 (HBASE-12439). Built on top of > Phase 1 (HBASE-14336), implement a new Assignment Manager based on Proc-V2. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HBASE-17046) Add 1.1 doc to hbase.apache.org
[ https://issues.apache.org/jira/browse/HBASE-17046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack resolved HBASE-17046. --- Resolution: Fixed Fix Version/s: 2.0.0 Release Note: Adds a 1.1. item to our 'Documentation and API' tab. Gives access to 1.1 APIs, XRef, etc. Added a 1.1 dir under a checkout of https://git-wip-us.apache.org/repos/asf/hbase-site.git Put a 1.1.7 site here and checked it all in. Added below change to master to add the 1.1 menu item. {code} commit d9316a64a9b271222c0cb4ea52f40a4dedc86676 Author: Michael StackDate: Mon Nov 7 22:16:32 2016 -0800 HBASE-17046 Add 1.1 doc to hbase.apache.org diff --git a/src/main/site/site.xml b/src/main/site/site.xml index 38dc0c1..a11e7e9 100644 --- a/src/main/site/site.xml +++ b/src/main/site/site.xml @@ -108,6 +108,11 @@ + + + + + {code} [~misty]'s "Successful: HBase Generate Website" quotidian dev list mailing was a big help here (Thanks [~misty]) > Add 1.1 doc to hbase.apache.org > --- > > Key: HBASE-17046 > URL: https://issues.apache.org/jira/browse/HBASE-17046 > Project: HBase > Issue Type: Task > Components: site >Reporter: stack >Assignee: stack > Fix For: 2.0.0 > > > Add link to 1.1 doc to the site. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-17075) Old regionStates are not clearing for truncated tables
[ https://issues.apache.org/jira/browse/HBASE-17075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15657794#comment-15657794 ] Hadoop QA commented on HBASE-17075: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s {color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 4s {color} | {color:red} HBASE-17075 does not apply to master. Rebase required? Wrong Branch? See https://yetus.apache.org/documentation/0.3.0/precommit-patchnames for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12838596/HBASE-17075.001.patch | | JIRA Issue | HBASE-17075 | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/4437/console | | Powered by | Apache Yetus 0.3.0 http://yetus.apache.org | This message was automatically generated. > Old regionStates are not clearing for truncated tables > --- > > Key: HBASE-17075 > URL: https://issues.apache.org/jira/browse/HBASE-17075 > Project: HBase > Issue Type: Bug > Components: Region Assignment >Affects Versions: 1.1.8 >Reporter: Y. SREENIVASULU REDDY >Priority: Minor > Fix For: 1.1.8 > > Attachments: HBASE-17075.001.patch > > > For truncated tables, not clearing the region states from master in-memory. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-17075) Old regionStates are not clearing for truncated tables
[ https://issues.apache.org/jira/browse/HBASE-17075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Y. SREENIVASULU REDDY updated HBASE-17075: -- Status: Patch Available (was: Open) > Old regionStates are not clearing for truncated tables > --- > > Key: HBASE-17075 > URL: https://issues.apache.org/jira/browse/HBASE-17075 > Project: HBase > Issue Type: Bug > Components: Region Assignment >Affects Versions: 1.1.8 >Reporter: Y. SREENIVASULU REDDY >Priority: Minor > Fix For: 1.1.8 > > Attachments: HBASE-17075.001.patch > > > For truncated tables, not clearing the region states from master in-memory. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-17075) Old regionStates are not clearing for truncated tables
[ https://issues.apache.org/jira/browse/HBASE-17075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15657770#comment-15657770 ] Ted Yu commented on HBASE-17075: Alignment needs to be adjusted for finally block. Name your patch with proper branch name. > Old regionStates are not clearing for truncated tables > --- > > Key: HBASE-17075 > URL: https://issues.apache.org/jira/browse/HBASE-17075 > Project: HBase > Issue Type: Bug > Components: Region Assignment >Affects Versions: 1.1.8 >Reporter: Y. SREENIVASULU REDDY >Priority: Minor > Fix For: 1.1.8 > > Attachments: HBASE-17075.001.patch > > > For truncated tables, not clearing the region states from master in-memory. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-17075) Old regionStates are not clearing for truncated tables
[ https://issues.apache.org/jira/browse/HBASE-17075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Y. SREENIVASULU REDDY updated HBASE-17075: -- Attachment: HBASE-17075.001.patch simple patch attached. > Old regionStates are not clearing for truncated tables > --- > > Key: HBASE-17075 > URL: https://issues.apache.org/jira/browse/HBASE-17075 > Project: HBase > Issue Type: Bug > Components: Region Assignment >Affects Versions: 1.1.8 >Reporter: Y. SREENIVASULU REDDY >Priority: Minor > Fix For: 1.1.8 > > Attachments: HBASE-17075.001.patch > > > For truncated tables, not clearing the region states from master in-memory. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-14123) HBase Backup/Restore Phase 2
[ https://issues.apache.org/jira/browse/HBASE-14123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15657749#comment-15657749 ] Vladimir Rodionov commented on HBASE-14123: --- [~saint@gmail.com], you can try v37. > HBase Backup/Restore Phase 2 > > > Key: HBASE-14123 > URL: https://issues.apache.org/jira/browse/HBASE-14123 > Project: HBase > Issue Type: Umbrella >Reporter: Vladimir Rodionov >Assignee: Vladimir Rodionov > Attachments: 14123-master.v14.txt, 14123-master.v15.txt, > 14123-master.v16.txt, 14123-master.v17.txt, 14123-master.v18.txt, > 14123-master.v19.txt, 14123-master.v2.txt, 14123-master.v20.txt, > 14123-master.v21.txt, 14123-master.v24.txt, 14123-master.v25.txt, > 14123-master.v27.txt, 14123-master.v28.txt, 14123-master.v29.full.txt, > 14123-master.v3.txt, 14123-master.v30.txt, 14123-master.v31.txt, > 14123-master.v32.txt, 14123-master.v33.txt, 14123-master.v34.txt, > 14123-master.v35.txt, 14123-master.v36.txt, 14123-master.v37.txt, > 14123-master.v5.txt, 14123-master.v6.txt, 14123-master.v7.txt, > 14123-master.v8.txt, 14123-master.v9.txt, 14123-v14.txt, > HBASE-14123-for-7912-v1.patch, HBASE-14123-for-7912-v6.patch, > HBASE-14123-v1.patch, HBASE-14123-v10.patch, HBASE-14123-v11.patch, > HBASE-14123-v12.patch, HBASE-14123-v13.patch, HBASE-14123-v15.patch, > HBASE-14123-v16.patch, HBASE-14123-v2.patch, HBASE-14123-v3.patch, > HBASE-14123-v4.patch, HBASE-14123-v5.patch, HBASE-14123-v6.patch, > HBASE-14123-v7.patch, HBASE-14123-v9.patch > > > Phase 2 umbrella JIRA. See HBASE-7912 for design document and description. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HBASE-12433) Coprocessors not dynamically reordered when reset priority
[ https://issues.apache.org/jira/browse/HBASE-12433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl resolved HBASE-12433. --- Resolution: Not A Bug > Coprocessors not dynamically reordered when reset priority > -- > > Key: HBASE-12433 > URL: https://issues.apache.org/jira/browse/HBASE-12433 > Project: HBase > Issue Type: Bug > Components: Coprocessors >Affects Versions: 0.98.7 >Reporter: James Taylor > > When modifying the coprocessor priority through the HBase shell, the order of > the firing of the coprocessors wasn't changing. It probably would have with a > cluster bounce, but if we can make it dynamic easily, that would be > preferable. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HBASE-12570) Improve table configuration sanity checking
[ https://issues.apache.org/jira/browse/HBASE-12570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl resolved HBASE-12570. --- Resolution: Duplicate > Improve table configuration sanity checking > --- > > Key: HBASE-12570 > URL: https://issues.apache.org/jira/browse/HBASE-12570 > Project: HBase > Issue Type: Umbrella >Reporter: James Taylor > > See PHOENIX-1473. If a split policy class cannot be resolved, then your HBase > cluster will be brought down as each region server that successively attempts > to open the region will not find the class and will bring itself down. > One idea to prevent this would be to fail the CREATE TABLE or ALTER TABLE > admin call if the split policy class cannot be found. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-14123) HBase Backup/Restore Phase 2
[ https://issues.apache.org/jira/browse/HBASE-14123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Rodionov updated HBASE-14123: -- Attachment: 14123-master.v37.txt v37. Rebase to master > HBase Backup/Restore Phase 2 > > > Key: HBASE-14123 > URL: https://issues.apache.org/jira/browse/HBASE-14123 > Project: HBase > Issue Type: Umbrella >Reporter: Vladimir Rodionov >Assignee: Vladimir Rodionov > Attachments: 14123-master.v14.txt, 14123-master.v15.txt, > 14123-master.v16.txt, 14123-master.v17.txt, 14123-master.v18.txt, > 14123-master.v19.txt, 14123-master.v2.txt, 14123-master.v20.txt, > 14123-master.v21.txt, 14123-master.v24.txt, 14123-master.v25.txt, > 14123-master.v27.txt, 14123-master.v28.txt, 14123-master.v29.full.txt, > 14123-master.v3.txt, 14123-master.v30.txt, 14123-master.v31.txt, > 14123-master.v32.txt, 14123-master.v33.txt, 14123-master.v34.txt, > 14123-master.v35.txt, 14123-master.v36.txt, 14123-master.v37.txt, > 14123-master.v5.txt, 14123-master.v6.txt, 14123-master.v7.txt, > 14123-master.v8.txt, 14123-master.v9.txt, 14123-v14.txt, > HBASE-14123-for-7912-v1.patch, HBASE-14123-for-7912-v6.patch, > HBASE-14123-v1.patch, HBASE-14123-v10.patch, HBASE-14123-v11.patch, > HBASE-14123-v12.patch, HBASE-14123-v13.patch, HBASE-14123-v15.patch, > HBASE-14123-v16.patch, HBASE-14123-v2.patch, HBASE-14123-v3.patch, > HBASE-14123-v4.patch, HBASE-14123-v5.patch, HBASE-14123-v6.patch, > HBASE-14123-v7.patch, HBASE-14123-v9.patch > > > Phase 2 umbrella JIRA. See HBASE-7912 for design document and description. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-2256) Delete row, followed quickly to put of the same row will sometimes fail.
[ https://issues.apache.org/jira/browse/HBASE-2256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15657608#comment-15657608 ] stack commented on HBASE-2256: -- Please open a new issue [~gfeng] This one is a long time closed. World has changed a bunch since this issue too so probably different cause. Thanks. > Delete row, followed quickly to put of the same row will sometimes fail. > > > Key: HBASE-2256 > URL: https://issues.apache.org/jira/browse/HBASE-2256 > Project: HBase > Issue Type: Bug >Affects Versions: 0.20.3 >Reporter: Clint Morgan > Attachments: hbase-2256.patch > > > Doing a Delete of a whole row, followed immediately by a put to that row will > sometimes miss a cell. Attached is a test to provoke the issue. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-17074) PreCommit job always fails because of OOM
[ https://issues.apache.org/jira/browse/HBASE-17074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15657597#comment-15657597 ] stack commented on HBASE-17074: --- Infra has been doing a bunch of messing behind the scenes. Will take a look... > PreCommit job always fails because of OOM > - > > Key: HBASE-17074 > URL: https://issues.apache.org/jira/browse/HBASE-17074 > Project: HBase > Issue Type: Bug > Components: build >Reporter: Duo Zhang >Priority: Critical > > https://builds.apache.org/job/PreCommit-HBASE-Build/4434/artifact/patchprocess/patch-unit-hbase-server.txt > {noformat} > Exception in thread "Thread-2369" java.lang.OutOfMemoryError: Java heap space > at java.util.Arrays.copyOf(Arrays.java:3332) > at > java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:124) > at > java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:596) > at java.lang.StringBuffer.append(StringBuffer.java:367) > at java.io.BufferedReader.readLine(BufferedReader.java:370) > at java.io.BufferedReader.readLine(BufferedReader.java:389) > at > org.apache.maven.surefire.shade.org.apache.maven.shared.utils.cli.StreamPumper.run(StreamPumper.java:66) > Exception in thread "Thread-2357" java.lang.OutOfMemoryError: Java heap space > Exception in thread "Thread-2365" java.lang.OutOfMemoryError: Java heap space > Running org.apache.hadoop.hbase.filter.TestFuzzyRowFilterEndToEnd > Running org.apache.hadoop.hbase.filter.TestFilterListOrOperatorWithBlkCnt > Exception in thread "Thread-2383" java.lang.OutOfMemoryError: Java heap space > Exception in thread "Thread-2397" java.lang.OutOfMemoryError: Java heap space > Exception in thread "Thread-2401" java.lang.OutOfMemoryError: Java heap space > Running org.apache.hadoop.hbase.TestHBaseTestingUtility > Exception in thread "Thread-2407" java.lang.OutOfMemoryError: Java heap space > Exception in thread "Thread-2411" java.lang.OutOfMemoryError: Java heap space > Exception in thread 
"Thread-2413" java.lang.OutOfMemoryError: Java heap space > {noformat} > The OOM happens in the surefire plugin when reading the stdout or stderr of > the running test... -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16838) Implement basic scan
[ https://issues.apache.org/jira/browse/HBASE-16838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15657589#comment-15657589 ] stack commented on HBASE-16838: --- All sounds good to me (and sounds fine doing as follow ons). The issue raised by [~carp84] that this work is not fit for other than advanced users is a good point. Your suggestion of renaming AsyncTable RawAsyncTable or UnsafeAsyncTable is warranted. We have to be careful what we expose as our default API. Need to protect folks from shooting themselves in the foot. They'll just get pissed off. Good stuff [~Apache9] > Implement basic scan > > > Key: HBASE-16838 > URL: https://issues.apache.org/jira/browse/HBASE-16838 > Project: HBase > Issue Type: Sub-task > Components: asyncclient, Client >Affects Versions: 2.0.0 >Reporter: Duo Zhang >Assignee: Duo Zhang > Fix For: 2.0.0 > > Attachments: HBASE-16838-v1.patch, HBASE-16838-v2.patch, > HBASE-16838-v3.patch, HBASE-16838-v4.patch, HBASE-16838.patch > > > Implement a scan works like the grpc streaming call that all returned results > will be passed to a ScanConsumer. The methods of the consumer will be called > directly in the rpc framework threads so it is not allowed to do time > consuming work in the methods. So in general only experts or the > implementation of other methods in AsyncTable can call this method directly, > that's why I call it 'basic scan'. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
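[Editorial note] The "basic scan" contract discussed in the message above (consumer callbacks invoked directly on RPC framework threads, so they must never block) can be illustrated with a small sketch. All class and method names below are hypothetical stand-ins, not the actual HBASE-16838 API; the point is the pattern: a raw consumer for experts, plus a wrapper that re-dispatches callbacks onto a user-supplied executor so ordinary callers can safely do slow work.

```java
// Hedged sketch of the raw-consumer scan pattern. Names are invented for
// illustration and do not match the real AsyncTable/RawAsyncTable classes.
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class RawScanSketch {
    // Expert-only contract: callbacks run on the delivering (RPC) thread
    // and therefore must not block or do time-consuming work.
    interface RawScanConsumer {
        void onNext(String row);
        void onComplete();
    }

    // Simulated framework side: delivers rows synchronously on its own thread.
    static void rawScan(List<String> rows, RawScanConsumer consumer) {
        for (String row : rows) consumer.onNext(row);
        consumer.onComplete();
    }

    // Safe wrapper: hands every callback off to an executor, so user code
    // may block without stalling the delivering thread. This is roughly why
    // a non-raw AsyncTable can be built on top of the raw variant.
    static RawScanConsumer offload(RawScanConsumer user, ExecutorService pool) {
        return new RawScanConsumer() {
            public void onNext(String row) { pool.execute(() -> user.onNext(row)); }
            public void onComplete()       { pool.execute(user::onComplete); }
        };
    }

    public static void main(String[] args) throws Exception {
        List<String> seen = new ArrayList<>();
        ExecutorService pool = Executors.newSingleThreadExecutor();
        rawScan(List.of("r1", "r2"), offload(new RawScanConsumer() {
            public void onNext(String row) { seen.add(row); }
            public void onComplete()       { seen.add("done"); }
        }, pool));
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println(seen); // [r1, r2, done]
    }
}
```

A single-thread executor preserves delivery order, which is why the offloaded consumer still sees rows in scan order.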
[jira] [Commented] (HBASE-17058) Lower epsilon used for jitter verification from HBASE-15324
[ https://issues.apache.org/jira/browse/HBASE-17058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15657581#comment-15657581 ] Esteban Gutierrez commented on HBASE-17058: --- No unit tests needed, existing unit test from HBASE-15324 is sufficient. > Lower epsilon used for jitter verification from HBASE-15324 > --- > > Key: HBASE-17058 > URL: https://issues.apache.org/jira/browse/HBASE-17058 > Project: HBase > Issue Type: Bug > Components: Compaction >Affects Versions: 2.0.0, 1.3.0, 1.4.0, 1.1.7, 1.2.4 >Reporter: Esteban Gutierrez >Assignee: Esteban Gutierrez > Attachments: HBASE-17058.master.001.patch > > > The current epsilon used is 1E-6 and its too big it might overflow the > desiredMaxFileSize. A trivial fix is to lower the epsilon to 2^-52 or even > 2^-53. An option to consider too is just to shift the jitter to always > decrement hbase.hregion.max.filesize (MAX_FILESIZE) instead of increase the > size of the region and having to deal with the round off. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
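[Editorial note] A rough illustration of the epsilon discussion above. The figures are illustrative, not taken from the HBASE-15324 test itself: 2^-52 equals Math.ulp(1.0), the smallest representable relative error of a double multiplier near 1.0, so it cannot meaningfully inflate a file-size bound, whereas a 1E-6 epsilon admits thousands of bytes of slack at typical region sizes.

```java
// Illustrative numbers only (hypothetical helper, not HBase code): compares
// the byte slack each relative epsilon admits at a ~9.6 GB region size.
public class EpsilonSketch {
    // Bytes of slack that a relative epsilon allows at a given file size.
    static long slack(long maxFileSize, double eps) {
        return (long) (maxFileSize * eps);
    }

    public static void main(String[] args) {
        double machineEps = Math.pow(2, -52);
        System.out.println(machineEps == Math.ulp(1.0)); // true
        long maxFileSize = 9L * 1024 * 1024 * 1024;      // 9663676416 bytes
        System.out.println(slack(maxFileSize, 1e-6));    // 9663 bytes of slack
        System.out.println(slack(maxFileSize, machineEps)); // 0
    }
}
```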
[jira] [Commented] (HBASE-17020) keylen in midkey() dont computed correctly
[ https://issues.apache.org/jira/browse/HBASE-17020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15657573#comment-15657573 ] Hudson commented on HBASE-17020: FAILURE: Integrated in Jenkins build HBase-0.98-on-Hadoop-1.1 #1287 (See [https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/1287/]) HBASE-17020 keylen in midkey() dont computed correctly (liyu: rev cf2cb620e6167c079add7b83efbcea1bed8dd7d3) * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlockIndex.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileWriterV2.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlockIndex.java > keylen in midkey() dont computed correctly > -- > > Key: HBASE-17020 > URL: https://issues.apache.org/jira/browse/HBASE-17020 > Project: HBase > Issue Type: Bug > Components: HFile >Affects Versions: 2.0.0, 1.3.0, 1.4.0, 1.1.7, 0.98.23, 1.2.4 >Reporter: Yu Sun >Assignee: Yu Sun > Fix For: 2.0.0, 1.4.0, 1.2.5, 0.98.24, 1.1.8 > > Attachments: HBASE-17020-branch-0.98.patch, HBASE-17020-v1.patch, > HBASE-17020-v2.patch, HBASE-17020-v2.patch, HBASE-17020-v3-branch1.1.patch, > HBASE-17020.branch-0.98.patch, HBASE-17020.branch-0.98.patch, > HBASE-17020.branch-1.1.patch > > > in CellBasedKeyBlockIndexReader.midkey(): > {code} > ByteBuff b = midLeafBlock.getBufferWithoutHeader(); > int numDataBlocks = b.getIntAfterPosition(0); > int keyRelOffset = b.getIntAfterPosition(Bytes.SIZEOF_INT * > (midKeyEntry + 1)); > int keyLen = b.getIntAfterPosition(Bytes.SIZEOF_INT * (midKeyEntry > + 2)) - keyRelOffset; > {code} > the local varible keyLen get this should be total length of: > SECONDARY_INDEX_ENTRY_OVERHEAD + firstKey.length; > the code is: > {code} > void add(byte[] firstKey, long blockOffset, int onDiskDataSize, > long curTotalNumSubEntries) { > // Record the offset for the secondary index > secondaryIndexOffsetMarks.add(curTotalNonRootEntrySize); > curTotalNonRootEntrySize += 
SECONDARY_INDEX_ENTRY_OVERHEAD > + firstKey.length; > {code} > when the midkey last entry of a leaf-level index block, this may throw: > {quote} > 2016-10-01 12:27:55,186 ERROR [MemStoreFlusher.0] > regionserver.MemStoreFlusher: Cache flusher failed for entry [flush region > pora_6_item_feature,0061:,1473838922457.12617bc4ebbfd171018bf96ac9bdd2a7.] > java.lang.ArrayIndexOutOfBoundsException > at > org.apache.hadoop.hbase.util.ByteBufferUtils.copyFromBufferToArray(ByteBufferUtils.java:936) > at > org.apache.hadoop.hbase.nio.SingleByteBuff.toBytes(SingleByteBuff.java:303) > at > org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$CellBasedKeyBlockIndexReader.midkey(HFileBlockIndex.java:419) > at > org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.midkey(HFileReaderImpl.java:1519) > at > org.apache.hadoop.hbase.regionserver.StoreFile$Reader.midkey(StoreFile.java:1520) > at > org.apache.hadoop.hbase.regionserver.StoreFile.getFileSplitPoint(StoreFile.java:706) > at > org.apache.hadoop.hbase.regionserver.DefaultStoreFileManager.getSplitPoint(DefaultStoreFileManager.java:126) > at > org.apache.hadoop.hbase.regionserver.HStore.getSplitPoint(HStore.java:1983) > at > org.apache.hadoop.hbase.regionserver.ConstantFamilySizeRegionSplitPolicy.getSplitPoint(ConstantFamilySizeRegionSplitPolicy.java:77) > at > org.apache.hadoop.hbase.regionserver.HRegion.checkSplit(HRegion.java:7756) > at > org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:513) > at > org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:471) > at > org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$800(MemStoreFlusher.java:75) > at > org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:259) > at java.lang.Thread.run(Thread.java:756) > {quote} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
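[Editorial note] The arithmetic at issue in the report above can be sketched independently of the HBase classes. Each non-root secondary-index entry occupies SECONDARY_INDEX_ENTRY_OVERHEAD (a long block offset plus an int on-disk size, 12 bytes) plus its key bytes, so the difference of two adjacent offset marks is overhead plus key length; the key length must be recovered by subtracting the overhead, otherwise a read of the last entry's key runs 12 bytes past its end, matching the ArrayIndexOutOfBoundsException in the stack trace. This is a minimal model, not the patched HFileBlockIndex code.

```java
// Minimal model of the secondary-index size arithmetic (hypothetical class,
// not the real CellBasedKeyBlockIndexReader). offsetMarks[i] is the
// cumulative size of entries 0..i-1, as accumulated by the quoted add().
public class MidKeyLenSketch {
    // A long block offset (8) plus an int on-disk data size (4).
    static final int SECONDARY_INDEX_ENTRY_OVERHEAD = 8 + 4;

    static int keyLen(int[] offsetMarks, int entry) {
        int entrySize = offsetMarks[entry + 1] - offsetMarks[entry];
        // The raw difference includes the per-entry overhead; subtract it
        // to get the key length alone.
        return entrySize - SECONDARY_INDEX_ENTRY_OVERHEAD;
    }

    public static void main(String[] args) {
        // Two entries whose keys are 5 and 7 bytes long:
        int[] marks = {0, 12 + 5, 12 + 5 + 12 + 7}; // {0, 17, 36}
        System.out.println(keyLen(marks, 0)); // 5
        System.out.println(keyLen(marks, 1)); // 7
    }
}
```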
[jira] [Updated] (HBASE-16565) Add metrics for backup / restore
[ https://issues.apache.org/jira/browse/HBASE-16565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-16565: --- Description: Exposing metrics for backup / restore would give admins insight on the overall operations. The metrics should include (but are not limited to): * number of backups performed (full / incremental) * number of restores performed (full / incremental) * number of aborted backups * number of aborted restores * duration of backups performed * duration of restores performed was: Exposing metrics for backup / restore would give admins insight on the overall operations. The metrics should include (but are not limited to): * number of backups performed (full / incremental) * number of restores performed (full / incremental) * number of aborted backups * number of aborted restores > Add metrics for backup / restore > > > Key: HBASE-16565 > URL: https://issues.apache.org/jira/browse/HBASE-16565 > Project: HBase > Issue Type: Improvement >Reporter: Ted Yu > Labels: backup, metrics > > Exposing metrics for backup / restore would give admins insight on the overall > operations. > The metrics should include (but are not limited to): > * number of backups performed (full / incremental) > * number of restores performed (full / incremental) > * number of aborted backups > * number of aborted restores > * duration of backups performed > * duration of restores performed -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-17047) Add an API to get HBase connection cache statistics
[ https://issues.apache.org/jira/browse/HBASE-17047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15657548#comment-15657548 ] Weiqing Yang commented on HBASE-17047: -- Thanks for the review. [~tedyu] > Add an API to get HBase connection cache statistics > --- > > Key: HBASE-17047 > URL: https://issues.apache.org/jira/browse/HBASE-17047 > Project: HBase > Issue Type: Improvement > Components: spark >Reporter: Weiqing Yang >Assignee: Weiqing Yang >Priority: Minor > Fix For: 2.0.0 > > Attachments: HBASE-17047_v1.patch, HBASE-17047_v2.patch > > > This patch will add a function "getStat" for the user to get the statistics > of the HBase connection cache. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-17062) RegionSplitter throws ClassCastException
[ https://issues.apache.org/jira/browse/HBASE-17062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-17062: --- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 1.4.0 2.0.0 Status: Resolved (was: Patch Available) Thanks for the patch, Jeongdae > RegionSplitter throws ClassCastException > > > Key: HBASE-17062 > URL: https://issues.apache.org/jira/browse/HBASE-17062 > Project: HBase > Issue Type: Bug > Components: util >Reporter: Jeongdae Kim >Assignee: Jeongdae Kim >Priority: Minor > Fix For: 2.0.0, 1.4.0 > > Attachments: HBASE-17062.001.patch, HBASE-17062.002.patch, > HBASE-17062.003.patch > > > RegionSplitter throws Exception as below. > Exception in thread "main" java.lang.ClassCastException: > org.apache.hadoop.hbase.ServerName cannot be cast to java.lang.String > at java.lang.String.compareTo(String.java:108) > at java.util.TreeMap.getEntry(TreeMap.java:346) > at java.util.TreeMap.get(TreeMap.java:273) > at > org.apache.hadoop.hbase.util.RegionSplitter$1.compare(RegionSplitter.java:504) > at > org.apache.hadoop.hbase.util.RegionSplitter$1.compare(RegionSplitter.java:502) > at java.util.TimSort.countRunAndMakeAscending(TimSort.java:324) > at java.util.TimSort.sort(TimSort.java:189) > at java.util.TimSort.sort(TimSort.java:173) > at java.util.Arrays.sort(Arrays.java:659) > at java.util.Collections.sort(Collections.java:217) > at > org.apache.hadoop.hbase.util.RegionSplitter.rollingSplit(RegionSplitter.java:502) > at > org.apache.hadoop.hbase.util.RegionSplitter.main(RegionSplitter.java:367) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
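The stack trace above admits a compact reproduction. The sketch below is hypothetical (a stand-in ServerName class, not HBase's) but shows the exact failure mode: a TreeMap whose keys are one Comparable type, probed with a key of a different type, throws ClassCastException inside compareTo.

```java
import java.util.TreeMap;

public class TreeMapCceSketch {
    // Hypothetical stand-in for org.apache.hadoop.hbase.ServerName.
    static final class ServerName implements Comparable<ServerName> {
        final String name;
        ServerName(String name) { this.name = name; }
        @Override public int compareTo(ServerName other) { return name.compareTo(other.name); }
        @Override public String toString() { return name; }
    }

    // Probing a ServerName-keyed TreeMap with a String: TreeMap casts the
    // query key to Comparable and calls String.compareTo on a ServerName
    // argument, whose erased bridge method throws ClassCastException
    // ("ServerName cannot be cast to java.lang.String"), as in the report.
    static boolean probeWithWrongKeyType() {
        TreeMap<Object, Integer> regionCounts = new TreeMap<>();
        regionCounts.put(new ServerName("rs1.example.com,16020,1"), 3);
        try {
            regionCounts.get("rs1.example.com,16020,1");
            return false; // not reached: the lookup throws
        } catch (ClassCastException expected) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println(probeWithWrongKeyType()); // true
    }
}
```

The fix direction is simply to keep the map's key type and the lookup key type consistent, which is what the attached patches address in RegionSplitter's comparator.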
[jira] [Assigned] (HBASE-17062) RegionSplitter throws ClassCastException
[ https://issues.apache.org/jira/browse/HBASE-17062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu reassigned HBASE-17062: -- Assignee: Jeongdae Kim > RegionSplitter throws ClassCastException > > > Key: HBASE-17062 > URL: https://issues.apache.org/jira/browse/HBASE-17062 > Project: HBase > Issue Type: Bug > Components: util >Reporter: Jeongdae Kim >Assignee: Jeongdae Kim >Priority: Minor > Attachments: HBASE-17062.001.patch, HBASE-17062.002.patch, > HBASE-17062.003.patch > > > RegionSplitter throws Exception as below. > Exception in thread "main" java.lang.ClassCastException: > org.apache.hadoop.hbase.ServerName cannot be cast to java.lang.String > at java.lang.String.compareTo(String.java:108) > at java.util.TreeMap.getEntry(TreeMap.java:346) > at java.util.TreeMap.get(TreeMap.java:273) > at > org.apache.hadoop.hbase.util.RegionSplitter$1.compare(RegionSplitter.java:504) > at > org.apache.hadoop.hbase.util.RegionSplitter$1.compare(RegionSplitter.java:502) > at java.util.TimSort.countRunAndMakeAscending(TimSort.java:324) > at java.util.TimSort.sort(TimSort.java:189) > at java.util.TimSort.sort(TimSort.java:173) > at java.util.Arrays.sort(Arrays.java:659) > at java.util.Collections.sort(Collections.java:217) > at > org.apache.hadoop.hbase.util.RegionSplitter.rollingSplit(RegionSplitter.java:502) > at > org.apache.hadoop.hbase.util.RegionSplitter.main(RegionSplitter.java:367) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-17075) Old regionStates are not clearing for truncated tables
[ https://issues.apache.org/jira/browse/HBASE-17075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15657322#comment-15657322 ] Y. SREENIVASULU REDDY commented on HBASE-17075: --- This problem exists in the 1.0 and 1.1 versions. > Old regionStates are not clearing for truncated tables > --- > > Key: HBASE-17075 > URL: https://issues.apache.org/jira/browse/HBASE-17075 > Project: HBase > Issue Type: Bug > Components: Region Assignment >Affects Versions: 1.1.8 >Reporter: Y. SREENIVASULU REDDY >Priority: Minor > Fix For: 1.1.8 > > > For truncated tables, the region states are not cleared from the master's in-memory state. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-17075) Old regionStates are not clearing for truncated tables
Y. SREENIVASULU REDDY created HBASE-17075: - Summary: Old regionStates are not clearing for truncated tables Key: HBASE-17075 URL: https://issues.apache.org/jira/browse/HBASE-17075 Project: HBase Issue Type: Bug Components: Region Assignment Affects Versions: 1.1.8 Reporter: Y. SREENIVASULU REDDY Priority: Minor Fix For: 1.1.8 For truncated tables, the region states are not cleared from the master's in-memory state. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-17047) Add an API to get HBase connection cache statistics
[ https://issues.apache.org/jira/browse/HBASE-17047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-17047: --- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 2.0.0 Status: Resolved (was: Patch Available) > Add an API to get HBase connection cache statistics > --- > > Key: HBASE-17047 > URL: https://issues.apache.org/jira/browse/HBASE-17047 > Project: HBase > Issue Type: Improvement > Components: spark >Reporter: Weiqing Yang >Assignee: Weiqing Yang >Priority: Minor > Fix For: 2.0.0 > > Attachments: HBASE-17047_v1.patch, HBASE-17047_v2.patch > > > This patch will add a function "getStat" for the user to get the statistics > of the HBase connection cache. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-17072) CPU usage starts to climb up to 90-100% when using G1GC
[ https://issues.apache.org/jira/browse/HBASE-17072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15657155#comment-15657155 ] Eiichi Sato commented on HBASE-17072: - The problem is called [primary clustering|https://en.wikipedia.org/wiki/Primary_clustering]. {{java.lang.ThreadLocal}} as provided by the Oracle JDK is implemented using an open-addressing hash table with linear probing, which, as described in the link, is known to be vulnerable to this phenomenon. On the other hand, {{java.util.HashMap}} is backed by chained hash tables, which degrade gracefully under hash collisions. This is why I think we shouldn't use an indefinite number of ThreadLocal instances. > CPU usage starts to climb up to 90-100% when using G1GC > --- > > Key: HBASE-17072 > URL: https://issues.apache.org/jira/browse/HBASE-17072 > Project: HBase > Issue Type: Bug > Components: Performance, regionserver >Affects Versions: 1.0.0, 1.2.0 >Reporter: Eiichi Sato > Attachments: disable-block-header-cache.patch, mat-threadlocals.png, > mat-threads.png, metrics.png, slave1.svg, slave2.svg, slave3.svg, slave4.svg > > > h5. Problem > CPU usage of a region server in our CDH 5.4.5 cluster, at some point, starts > to gradually get higher up to nearly 90-100% when using G1GC. We've also run > into this problem on CDH 5.7.3 and CDH 5.8.2. > In our production cluster, it normally takes a few weeks for this to happen > after restarting a RS. We reproduced this on our test cluster and attached > the results. Please note that, to make it easy to reproduce, we did some > "anti-tuning" on a table when running tests. > In metrics.png, soon after we started running some workloads against a test > cluster (CDH 5.8.2) at about 7 p.m., CPU usage of the two RSs started to rise. > Flame Graphs (slave1.svg to slave4.svg) are generated from jstack dumps of > each RS process around 10:30 a.m. the next day. 
> After investigating heapdumps from another occurrence on a test cluster > running CDH 5.7.3, we found that the ThreadLocalMap contains a lot of > contiguous entries of {{HFileBlock$PrefetchedHeader}}, probably due to primary > clustering. This caused more loops in > {{ThreadLocalMap#expungeStaleEntries()}}, consuming a certain amount of CPU > time. What is worse is that the method is called from RPC metrics code, > which means even a small amount of per-RPC time soon adds up to a huge amount > of CPU time. > This is very similar to the issue in HBASE-16616, but we have many > {{HFileBlock$PrefetchedHeader}} instances, not only {{Counter$IndexHolder}} instances. > Here are some OQL counts from Eclipse Memory Analyzer (MAT). This shows the > number of ThreadLocal instances in the ThreadLocalMap of a single handler > thread. > {code} > SELECT * > FROM OBJECTS (SELECT AS RETAINED SET OBJECTS value > FROM OBJECTS 0x4ee380430) obj > WHERE obj.@clazz.@name = > "org.apache.hadoop.hbase.io.hfile.HFileBlock$PrefetchedHeader" > #=> 10980 instances > {code} > {code} > SELECT * > FROM OBJECTS (SELECT AS RETAINED SET OBJECTS value > FROM OBJECTS 0x4ee380430) obj > WHERE obj.@clazz.@name = "org.apache.hadoop.hbase.util.Counter$IndexHolder" > #=> 2052 instances > {code} > Although, as described in HBASE-16616, this somewhat seems to be an issue on > the G1GC side regarding weakly-reachable objects, we should keep ThreadLocal > usage minimal and avoid creating an indefinite number (in this case, a number > of HFiles) of ThreadLocal instances. > HBASE-16146 removes ThreadLocals from the RPC metrics code. That may solve > the issue (I just saw the patch, never tested it at all), but the > {{HFileBlock$PrefetchedHeader}} instances are still there in the ThreadLocalMap, which > may cause issues in the future again. > h5. Our Solution > We simply removed the whole {{HFileBlock$PrefetchedHeader}} caching and > fortunately we didn't notice any performance degradation for our production > workloads. 
> Because the PrefetchedHeader caching uses ThreadLocal and because RPCs are > handled randomly in any of the handlers, small Get or small Scan RPCs do not > benefit from the caching (See HBASE-10676 and HBASE-11402 for the details). > Probably, we need to see how well reads are saved by the caching for large > Scan or Get RPCs and especially for compactions if we really remove the > caching. It's probably better if we can remove ThreadLocals without breaking > the current caching behavior. > FWIW, I'm attaching the patch we applied. It's for CDH 5.4.5. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
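One way to keep the ThreadLocalMap small without removing per-thread caching entirely is sketched below. This is an illustrative alternative, not the attached patch (which removes the caching outright): hold a single ThreadLocal containing a per-thread HashMap keyed by file name, so the ThreadLocalMap entry count stays constant no matter how many HFiles are open, and collisions land in the chained HashMap, which degrades gracefully.

```java
import java.util.HashMap;
import java.util.Map;

public class PrefetchedHeaderCacheSketch {
    // Minimal stand-in for HFileBlock$PrefetchedHeader.
    static final class PrefetchedHeader {
        long offset = -1;
        byte[] header = new byte[0];
    }

    // One ThreadLocal in total, instead of one ThreadLocal per open HFile:
    // the per-file headers live in an ordinary chained HashMap.
    private static final ThreadLocal<Map<String, PrefetchedHeader>> CACHE =
        ThreadLocal.withInitial(HashMap::new);

    static PrefetchedHeader headerFor(String hfileName) {
        return CACHE.get().computeIfAbsent(hfileName, n -> new PrefetchedHeader());
    }

    public static void main(String[] args) {
        PrefetchedHeader a = headerFor("hfile-0001");
        PrefetchedHeader b = headerFor("hfile-0001");
        System.out.println(a == b); // true: the per-thread instance is reused
    }
}
```

A real implementation would also need to evict entries when store files are closed, otherwise the per-thread map grows with the number of files ever read by that handler.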
[jira] [Commented] (HBASE-17072) CPU usage starts to climb up to 90-100% when using G1GC
[ https://issues.apache.org/jira/browse/HBASE-17072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15657123#comment-15657123 ] Eiichi Sato commented on HBASE-17072: - Thanks for the suggestion. I wasn't aware of HBASE-17017. However, the problem I want to talk here is not about them being expensive themselves. They are interfered by thousands of {{ThreadLocal}} in the ThreadLocalMap and eventually that will make innocent ThreadLocal users (in this case, counters and histograms, prior to HBASE-16146) waste too much CPU time. > CPU usage starts to climb up to 90-100% when using G1GC > --- > > Key: HBASE-17072 > URL: https://issues.apache.org/jira/browse/HBASE-17072 > Project: HBase > Issue Type: Bug > Components: Performance, regionserver >Affects Versions: 1.0.0, 1.2.0 >Reporter: Eiichi Sato > Attachments: disable-block-header-cache.patch, mat-threadlocals.png, > mat-threads.png, metrics.png, slave1.svg, slave2.svg, slave3.svg, slave4.svg > > > h5. Problem > CPU usage of a region server in our CDH 5.4.5 cluster, at some point, starts > to gradually get higher up to nearly 90-100% when using G1GC. We've also run > into this problem on CDH 5.7.3 and CDH 5.8.2. > In our production cluster, it normally takes a few weeks for this to happen > after restarting a RS. We reproduced this on our test cluster and attached > the results. Please note that, to make it easy to reproduce, we did some > "anti-tuning" on a table when running tests. > In metrics.png, soon after we started running some workloads against a test > cluster (CDH 5.8.2) at about 7 p.m. CPU usage of the two RSs started to rise. > Flame Graphs (slave1.svg to slave4.svg) are generated from jstack dumps of > each RS process around 10:30 a.m. the next day. > After investigating heapdumps from another occurrence on a test cluster > running CDH 5.7.3, we found that the ThreadLocalMap contain a lot of > contiguous entries of {{HFileBlock$PrefetchedHeader}} probably due to primary > clustering. 
This caused more loops in > {{ThreadLocalMap#expungeStaleEntries()}}, consuming a certain amount of CPU > time. What is worse is that the method is called from RPC metrics code, > which means even a small amount of per-RPC time soon adds up to a huge amount > of CPU time. > This is very similar to the issue in HBASE-16616, but we have many > {{HFileBlock$PrefetchedHeader}} not only {{Counter$IndexHolder}} instances. > Here are some OQL counts from Eclipse Memory Analyzer (MAT). This shows a > number of ThreadLocal instances in the ThreadLocalMap of a single handler > thread. > {code} > SELECT * > FROM OBJECTS (SELECT AS RETAINED SET OBJECTS value > FROM OBJECTS 0x4ee380430) obj > WHERE obj.@clazz.@name = > "org.apache.hadoop.hbase.io.hfile.HFileBlock$PrefetchedHeader" > #=> 10980 instances > {code} > {code} > SELECT * > FROM OBJECTS (SELECT AS RETAINED SET OBJECTS value > FROM OBJECTS 0x4ee380430) obj > WHERE obj.@clazz.@name = "org.apache.hadoop.hbase.util.Counter$IndexHolder" > #=> 2052 instances > {code} > Although as described in HBASE-16616 this somewhat seems to be an issue in > G1GC side regarding weakly-reachable objects, we should keep ThreadLocal > usage minimal and avoid creating an indefinite number (in this case, a number > of HFiles) of ThreadLocal instances. > HBASE-16146 removes ThreadLocals from the RPC metrics code. That may solve > the issue (I just saw the patch, never tested it at all), but the > {{HFileBlock$PrefetchedHeader}} are still there in the ThreadLocalMap, which > may cause issues in the future again. > h5. Our Solution > We simply removed the whole {{HFileBlock$PrefetchedHeader}} caching and > fortunately we didn't notice any performance degradation for our production > workloads. > Because the PrefetchedHeader caching uses ThreadLocal and because RPCs are > handled randomly in any of the handlers, small Get or small Scan RPCs do not > benefit from the caching (See HBASE-10676 and HBASE-11402 for the details). 
> Probably, we need to see how well reads are saved by the caching for large > Scan or Get RPCs and especially for compactions if we really remove the > caching. It's probably better if we can remove ThreadLocals without breaking > the current caching behavior. > FWIW, I'm attaching the patch we applied. It's for CDH 5.4.5. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16838) Implement basic scan
[ https://issues.apache.org/jira/browse/HBASE-16838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-16838: -- Component/s: Client asyncclient > Implement basic scan > > > Key: HBASE-16838 > URL: https://issues.apache.org/jira/browse/HBASE-16838 > Project: HBase > Issue Type: Sub-task > Components: asyncclient, Client >Affects Versions: 2.0.0 >Reporter: Duo Zhang >Assignee: Duo Zhang > Fix For: 2.0.0 > > Attachments: HBASE-16838-v1.patch, HBASE-16838-v2.patch, > HBASE-16838-v3.patch, HBASE-16838-v4.patch, HBASE-16838.patch > > > Implement a scan that works like a gRPC streaming call: all returned results > will be passed to a ScanConsumer. The methods of the consumer will be called > directly on the RPC framework threads, so it is not allowed to do > time-consuming work in them. So in general only experts or the > implementation of other methods in AsyncTable should call this method directly; > that's why I call it a 'basic scan'. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16838) Implement basic scan
[ https://issues.apache.org/jira/browse/HBASE-16838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-16838: -- Resolution: Fixed Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) Pushed to master. Thanks all for reviewing. > Implement basic scan > > > Key: HBASE-16838 > URL: https://issues.apache.org/jira/browse/HBASE-16838 > Project: HBase > Issue Type: Sub-task > Components: asyncclient, Client >Affects Versions: 2.0.0 >Reporter: Duo Zhang >Assignee: Duo Zhang > Fix For: 2.0.0 > > Attachments: HBASE-16838-v1.patch, HBASE-16838-v2.patch, > HBASE-16838-v3.patch, HBASE-16838-v4.patch, HBASE-16838.patch > > > Implement a scan that works like a gRPC streaming call: all returned results > will be passed to a ScanConsumer. The methods of the consumer will be called > directly on the RPC framework threads, so it is not allowed to do > time-consuming work in them. So in general only experts or the > implementation of other methods in AsyncTable should call this method directly; > that's why I call it a 'basic scan'. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
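The contract described in the issue — results pushed to a consumer on RPC framework threads, so callbacks must return quickly and never block — can be sketched as a small interface. The names and the toy driver below are illustrative, not HBase's actual API:

```java
import java.util.ArrayList;
import java.util.List;

public class BasicScanSketch {
    // Push-style consumer: invoked on RPC framework threads, so
    // implementations must return quickly and must not block.
    interface ScanConsumer {
        void onNext(String row);        // one scan result
        void onError(Throwable error);  // terminal: scan failed
        void onComplete();              // terminal: scan finished
    }

    // Stand-in for the framework driving the scan and pushing results.
    static void scan(List<String> rows, ScanConsumer consumer) {
        try {
            for (String row : rows) {
                consumer.onNext(row);
            }
            consumer.onComplete();
        } catch (Throwable t) {
            consumer.onError(t);
        }
    }

    public static void main(String[] args) {
        List<String> seen = new ArrayList<>();
        scan(List.of("row1", "row2"), new ScanConsumer() {
            @Override public void onNext(String row) { seen.add(row); }
            @Override public void onError(Throwable error) { seen.add("error"); }
            @Override public void onComplete() { seen.add("done"); }
        });
        System.out.println(seen); // [row1, row2, done]
    }
}
```

Friendlier scan methods can then be built on top by wrapping this push interface in a buffering consumer, which is presumably why only expert callers use the basic form directly.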
[jira] [Commented] (HBASE-16838) Implement basic scan
[ https://issues.apache.org/jira/browse/HBASE-16838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15656987#comment-15656987 ] Duo Zhang commented on HBASE-16838: --- Filed HBASE-17074 to address the OOM problem of the pre commit job. Will commit shortly. > Implement basic scan > > > Key: HBASE-16838 > URL: https://issues.apache.org/jira/browse/HBASE-16838 > Project: HBase > Issue Type: Sub-task >Affects Versions: 2.0.0 >Reporter: Duo Zhang >Assignee: Duo Zhang > Fix For: 2.0.0 > > Attachments: HBASE-16838-v1.patch, HBASE-16838-v2.patch, > HBASE-16838-v3.patch, HBASE-16838-v4.patch, HBASE-16838.patch > > > Implement a scan that works like a gRPC streaming call: all returned results > will be passed to a ScanConsumer. The methods of the consumer will be called > directly on the RPC framework threads, so it is not allowed to do > time-consuming work in them. So in general only experts or the > implementation of other methods in AsyncTable should call this method directly; > that's why I call it a 'basic scan'. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-17074) PreCommit job always fails because of OOM
Duo Zhang created HBASE-17074: - Summary: PreCommit job always fails because of OOM Key: HBASE-17074 URL: https://issues.apache.org/jira/browse/HBASE-17074 Project: HBase Issue Type: Bug Components: build Reporter: Duo Zhang Priority: Critical https://builds.apache.org/job/PreCommit-HBASE-Build/4434/artifact/patchprocess/patch-unit-hbase-server.txt {noformat} Exception in thread "Thread-2369" java.lang.OutOfMemoryError: Java heap space at java.util.Arrays.copyOf(Arrays.java:3332) at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:124) at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:596) at java.lang.StringBuffer.append(StringBuffer.java:367) at java.io.BufferedReader.readLine(BufferedReader.java:370) at java.io.BufferedReader.readLine(BufferedReader.java:389) at org.apache.maven.surefire.shade.org.apache.maven.shared.utils.cli.StreamPumper.run(StreamPumper.java:66) Exception in thread "Thread-2357" java.lang.OutOfMemoryError: Java heap space Exception in thread "Thread-2365" java.lang.OutOfMemoryError: Java heap space Running org.apache.hadoop.hbase.filter.TestFuzzyRowFilterEndToEnd Running org.apache.hadoop.hbase.filter.TestFilterListOrOperatorWithBlkCnt Exception in thread "Thread-2383" java.lang.OutOfMemoryError: Java heap space Exception in thread "Thread-2397" java.lang.OutOfMemoryError: Java heap space Exception in thread "Thread-2401" java.lang.OutOfMemoryError: Java heap space Running org.apache.hadoop.hbase.TestHBaseTestingUtility Exception in thread "Thread-2407" java.lang.OutOfMemoryError: Java heap space Exception in thread "Thread-2411" java.lang.OutOfMemoryError: Java heap space Exception in thread "Thread-2413" java.lang.OutOfMemoryError: Java heap space {noformat} The OOM happens in the surefire plugin when reading the stdout or stderr of the running test... -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-17058) Lower epsilon used for jitter verification from HBASE-15324
[ https://issues.apache.org/jira/browse/HBASE-17058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15656934#comment-15656934 ] Hadoop QA commented on HBASE-17058: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. 
{color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 8s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 46s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 49s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 48s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 38s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 47s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 29m 23s {color} | {color:green} Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 2s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 29s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 94m 12s {color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 13s {color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 136m 19s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | Timed out junit tests | org.apache.hadoop.hbase.master.TestGetLastFlushedSequenceId | | | org.apache.hadoop.hbase.client.TestMetaWithReplicas | | | org.apache.hadoop.hbase.master.TestMasterWalManager | | | org.apache.hadoop.hbase.client.TestFromClientSideWithCoprocessor | | | org.apache.hadoop.hbase.client.TestHCM | | | org.apache.hadoop.hbase.client.TestTableSnapshotScanner | | | org.apache.hadoop.hbase.client.TestMobCloneSnapshotFromClient | \\ \\ || Subsystem || Report/Notes || | Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:7bda515 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12838531/HBASE-17058.master.001.patch | | JIRA Issue | HBASE-17058 | | Optional Tests | asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux 0cc04148cf6a 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / f9c6b66 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | unit | https://builds.apache.org/job/PreCommit-HBASE-Build/4435/artifact/patchprocess/patch-unit-hbase-server.txt | | unit test logs |
[jira] [Commented] (HBASE-16890) Analyze the performance of AsyncWAL and fix the same
[ https://issues.apache.org/jira/browse/HBASE-16890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15656848#comment-15656848 ] ramkrishna.s.vasudevan commented on HBASE-16890: Sorry. Even FSHLog does not call hsync(), it only calls hflush(). So we are not sure that the data is persisted to DN but we only ensure that new readers can read the data. > Analyze the performance of AsyncWAL and fix the same > > > Key: HBASE-16890 > URL: https://issues.apache.org/jira/browse/HBASE-16890 > Project: HBase > Issue Type: Sub-task > Components: wal >Affects Versions: 2.0.0 >Reporter: ramkrishna.s.vasudevan >Assignee: ramkrishna.s.vasudevan > Fix For: 2.0.0 > > Attachments: AsyncWAL_disruptor.patch, AsyncWAL_disruptor_1 > (2).patch, AsyncWAL_disruptor_3.patch, AsyncWAL_disruptor_3.patch, > AsyncWAL_disruptor_4.patch, AsyncWAL_disruptor_6.patch, > HBASE-16890-rc-v2.patch, HBASE-16890-rc-v3.patch, > HBASE-16890-remove-contention-v1.patch, HBASE-16890-remove-contention.patch, > Screen Shot 2016-10-25 at 7.34.47 PM.png, Screen Shot 2016-10-25 at 7.39.07 > PM.png, Screen Shot 2016-10-25 at 7.39.48 PM.png, Screen Shot 2016-11-04 at > 5.21.27 PM.png, Screen Shot 2016-11-04 at 5.30.18 PM.png, async.svg, > classic.svg, contention.png, contention_defaultWAL.png > > > Tests reveal that AsyncWAL under load in single node cluster performs slower > than the Default WAL. This task is to analyze and see if we could fix it. > See some discussions in the tail of JIRA HBASE-15536. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-17020) keylen in midkey() dont computed correctly
[ https://issues.apache.org/jira/browse/HBASE-17020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15656841#comment-15656841 ] Hudson commented on HBASE-17020: ABORTED: Integrated in Jenkins build HBase-1.1-JDK7 #1817 (See [https://builds.apache.org/job/HBase-1.1-JDK7/1817/]) HBASE-17020 keylen in midkey() dont computed correctly (liyu: rev 23e168d0b4aba2307ec9735da1cdeb8569ed9f63) * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlockIndex.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlockIndex.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileWriterV2.java > keylen in midkey() dont computed correctly > -- > > Key: HBASE-17020 > URL: https://issues.apache.org/jira/browse/HBASE-17020 > Project: HBase > Issue Type: Bug > Components: HFile >Affects Versions: 2.0.0, 1.3.0, 1.4.0, 1.1.7, 0.98.23, 1.2.4 >Reporter: Yu Sun >Assignee: Yu Sun > Fix For: 2.0.0, 1.4.0, 1.2.5, 0.98.24, 1.1.8 > > Attachments: HBASE-17020-branch-0.98.patch, HBASE-17020-v1.patch, > HBASE-17020-v2.patch, HBASE-17020-v2.patch, HBASE-17020-v3-branch1.1.patch, > HBASE-17020.branch-0.98.patch, HBASE-17020.branch-0.98.patch, > HBASE-17020.branch-1.1.patch > > > in CellBasedKeyBlockIndexReader.midkey(): > {code} > ByteBuff b = midLeafBlock.getBufferWithoutHeader(); > int numDataBlocks = b.getIntAfterPosition(0); > int keyRelOffset = b.getIntAfterPosition(Bytes.SIZEOF_INT * > (midKeyEntry + 1)); > int keyLen = b.getIntAfterPosition(Bytes.SIZEOF_INT * (midKeyEntry > + 2)) - keyRelOffset; > {code} > the value the local variable keyLen gets here is actually the total length of: > SECONDARY_INDEX_ENTRY_OVERHEAD + firstKey.length; > the code is: > {code} > void add(byte[] firstKey, long blockOffset, int onDiskDataSize, > long curTotalNumSubEntries) { > // Record the offset for the secondary index > secondaryIndexOffsetMarks.add(curTotalNonRootEntrySize); > curTotalNonRootEntrySize += 
SECONDARY_INDEX_ENTRY_OVERHEAD > + firstKey.length; > {code} > when the midkey is the last entry of a leaf-level index block, this may throw: > {quote} > 2016-10-01 12:27:55,186 ERROR [MemStoreFlusher.0] > regionserver.MemStoreFlusher: Cache flusher failed for entry [flush region > pora_6_item_feature,0061:,1473838922457.12617bc4ebbfd171018bf96ac9bdd2a7.] > java.lang.ArrayIndexOutOfBoundsException > at > org.apache.hadoop.hbase.util.ByteBufferUtils.copyFromBufferToArray(ByteBufferUtils.java:936) > at > org.apache.hadoop.hbase.nio.SingleByteBuff.toBytes(SingleByteBuff.java:303) > at > org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$CellBasedKeyBlockIndexReader.midkey(HFileBlockIndex.java:419) > at > org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.midkey(HFileReaderImpl.java:1519) > at > org.apache.hadoop.hbase.regionserver.StoreFile$Reader.midkey(StoreFile.java:1520) > at > org.apache.hadoop.hbase.regionserver.StoreFile.getFileSplitPoint(StoreFile.java:706) > at > org.apache.hadoop.hbase.regionserver.DefaultStoreFileManager.getSplitPoint(DefaultStoreFileManager.java:126) > at > org.apache.hadoop.hbase.regionserver.HStore.getSplitPoint(HStore.java:1983) > at > org.apache.hadoop.hbase.regionserver.ConstantFamilySizeRegionSplitPolicy.getSplitPoint(ConstantFamilySizeRegionSplitPolicy.java:77) > at > org.apache.hadoop.hbase.regionserver.HRegion.checkSplit(HRegion.java:7756) > at > org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:513) > at > org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:471) > at > org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$800(MemStoreFlusher.java:75) > at > org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:259) > at java.lang.Thread.run(Thread.java:756) > {quote} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-17071) Do not initialize MemstoreChunkPool when use mslab option is turned off
[ https://issues.apache.org/jira/browse/HBASE-17071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anoop Sam John updated HBASE-17071: --- Resolution: Fixed Status: Resolved (was: Patch Available) Pushed to master. Thanks for the review Yu Li, Stack. > Do not initialize MemstoreChunkPool when use mslab option is turned off > --- > > Key: HBASE-17071 > URL: https://issues.apache.org/jira/browse/HBASE-17071 > Project: HBase > Issue Type: Sub-task >Affects Versions: 2.0.0 >Reporter: Anoop Sam John >Assignee: Anoop Sam John > Fix For: 2.0.0 > > Attachments: HBASE-17071.patch > > > This is a 2.0 only issue and induced by HBASE-16407. > We are initializing MSLAB chunk pool along with RS start itself now. (To pass > it as a HeapMemoryTuneObserver). > When MSLAB is turned off (ie. hbase.hregion.memstore.mslab.enabled is > configured false) we should not be initializing MSLAB chunk pool at all. By > default the initial chunk count to be created will be 0 only. Still better > to avoid. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
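The guard described above can be sketched as follows — hypothetical stand-ins for HBase's Configuration and MemStoreChunkPool, not the actual HBASE-17071 patch. The point is to skip pool initialization entirely when MSLAB is off, rather than building a pool that merely starts with zero chunks:

```java
import java.util.HashMap;
import java.util.Map;

public class ChunkPoolGuardSketch {
    static final String MSLAB_ENABLED_KEY = "hbase.hregion.memstore.mslab.enabled";

    // Minimal stand-in for MemStoreChunkPool (illustrative only).
    static final class ChunkPool {
        final int initialCount;
        ChunkPool(int initialCount) { this.initialCount = initialCount; }
    }

    // Return null (no pool object at all) when MSLAB is disabled, so nothing
    // is registered with the heap-memory tuner or allocated up front.
    static ChunkPool initializePool(Map<String, String> conf) {
        boolean mslabEnabled =
            Boolean.parseBoolean(conf.getOrDefault(MSLAB_ENABLED_KEY, "true"));
        if (!mslabEnabled) {
            return null;
        }
        return new ChunkPool(0); // default initial chunk count is 0 anyway
    }

    public static void main(String[] args) {
        Map<String, String> off = new HashMap<>();
        off.put(MSLAB_ENABLED_KEY, "false");
        System.out.println(initializePool(off) == null);            // true: skipped entirely
        System.out.println(initializePool(new HashMap<>()) != null); // true: pool by default
    }
}
```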
[jira] [Commented] (HBASE-16838) Implement basic scan
[ https://issues.apache.org/jira/browse/HBASE-16838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15656786#comment-15656786 ] Hadoop QA commented on HBASE-16838: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 5 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 21s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 5s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 42s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 21s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 25s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 41s {color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | 
{color:green} mvninstall {color} | {color:green} 1m 1s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 51s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 51s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 42s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 22s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 27m 30s {color} | {color:green} Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 39s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 40s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 55s {color} | {color:green} hbase-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 82m 15s {color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 25s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 126m 39s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | Timed out junit tests | org.apache.hadoop.hbase.TestHBaseTestingUtility | | | org.apache.hadoop.hbase.security.visibility.TestVisibilityLabelsWithDefaultVisLabelService | | | org.apache.hadoop.hbase.filter.TestFilterListOrOperatorWithBlkCnt | | | org.apache.hadoop.hbase.security.visibility.TestVisibilityLabelsWithDeletes | | | org.apache.hadoop.hbase.filter.TestFuzzyRowFilterEndToEnd | \\ \\ || Subsystem || Report/Notes || | Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:7bda515 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12838528/HBASE-16838-v4.patch | | JIRA Issue | HBASE-16838 | | Optional Tests | asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux 19c82d087cd3 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / f9c6b66 | | Default Java |
[jira] [Commented] (HBASE-16890) Analyze the performance of AsyncWAL and fix the same
[ https://issues.apache.org/jira/browse/HBASE-16890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15656770#comment-15656770 ] ramkrishna.s.vasudevan commented on HBASE-16890: I want to ask some dumb questions. FSHLog calls WAL sync - internally we write data to the FSOutputStream and then call fsdos.sync() or fsdos.hflush(). So whatever was written till that point is transferred from the client to the DNs, and here we mark the packet with 'tosync = true'. Now there is no DFSClient here - instead you write directly to the DN. But I can see that in none of the packets you create do you set 'sync = true'; only on endBlock() do you set lastPacket=true. So what difference does this make? > Analyze the performance of AsyncWAL and fix the same > > > Key: HBASE-16890 > URL: https://issues.apache.org/jira/browse/HBASE-16890 > Project: HBase > Issue Type: Sub-task > Components: wal >Affects Versions: 2.0.0 >Reporter: ramkrishna.s.vasudevan >Assignee: ramkrishna.s.vasudevan > Fix For: 2.0.0 > > Attachments: AsyncWAL_disruptor.patch, AsyncWAL_disruptor_1 > (2).patch, AsyncWAL_disruptor_3.patch, AsyncWAL_disruptor_3.patch, > AsyncWAL_disruptor_4.patch, AsyncWAL_disruptor_6.patch, > HBASE-16890-rc-v2.patch, HBASE-16890-rc-v3.patch, > HBASE-16890-remove-contention-v1.patch, HBASE-16890-remove-contention.patch, > Screen Shot 2016-10-25 at 7.34.47 PM.png, Screen Shot 2016-10-25 at 7.39.07 > PM.png, Screen Shot 2016-10-25 at 7.39.48 PM.png, Screen Shot 2016-11-04 at > 5.21.27 PM.png, Screen Shot 2016-11-04 at 5.30.18 PM.png, async.svg, > classic.svg, contention.png, contention_defaultWAL.png > > > Tests reveal that AsyncWAL under load in single node cluster performs slower > than the Default WAL. This task is to analyze and see if we could fix it. > See some discussions in the tail of JIRA HBASE-15536. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16890) Analyze the performance of AsyncWAL and fix the same
[ https://issues.apache.org/jira/browse/HBASE-16890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15656724#comment-15656724 ] ramkrishna.s.vasudevan commented on HBASE-16890: bq. but the actual syncs on DFSOutput are definitely different... Yes. So the wal syncs are made to wait more in AsyncFSWAL. In FSHLog many of those syncs wait for less than 100 ms. > Analyze the performance of AsyncWAL and fix the same > > > Key: HBASE-16890 > URL: https://issues.apache.org/jira/browse/HBASE-16890 > Project: HBase > Issue Type: Sub-task > Components: wal >Affects Versions: 2.0.0 >Reporter: ramkrishna.s.vasudevan >Assignee: ramkrishna.s.vasudevan > Fix For: 2.0.0 > > Attachments: AsyncWAL_disruptor.patch, AsyncWAL_disruptor_1 > (2).patch, AsyncWAL_disruptor_3.patch, AsyncWAL_disruptor_3.patch, > AsyncWAL_disruptor_4.patch, AsyncWAL_disruptor_6.patch, > HBASE-16890-rc-v2.patch, HBASE-16890-rc-v3.patch, > HBASE-16890-remove-contention-v1.patch, HBASE-16890-remove-contention.patch, > Screen Shot 2016-10-25 at 7.34.47 PM.png, Screen Shot 2016-10-25 at 7.39.07 > PM.png, Screen Shot 2016-10-25 at 7.39.48 PM.png, Screen Shot 2016-11-04 at > 5.21.27 PM.png, Screen Shot 2016-11-04 at 5.30.18 PM.png, async.svg, > classic.svg, contention.png, contention_defaultWAL.png > > > Tests reveal that AsyncWAL under load in single node cluster performs slower > than the Default WAL. This task is to analyze and see if we could fix it. > See some discussions in the tail of JIRA HBASE-15536. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16890) Analyze the performance of AsyncWAL and fix the same
[ https://issues.apache.org/jira/browse/HBASE-16890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15656718#comment-15656718 ] Duo Zhang commented on HBASE-16890: --- {quote} we can assume that the number of sync futures that gets created are almost same {quote} Why? As in the above WALPE, the wal.syncs are the same, but the actual syncs on DFSOutput are definitely different... > Analyze the performance of AsyncWAL and fix the same > > > Key: HBASE-16890 > URL: https://issues.apache.org/jira/browse/HBASE-16890 > Project: HBase > Issue Type: Sub-task > Components: wal >Affects Versions: 2.0.0 >Reporter: ramkrishna.s.vasudevan >Assignee: ramkrishna.s.vasudevan > Fix For: 2.0.0 > > Attachments: AsyncWAL_disruptor.patch, AsyncWAL_disruptor_1 > (2).patch, AsyncWAL_disruptor_3.patch, AsyncWAL_disruptor_3.patch, > AsyncWAL_disruptor_4.patch, AsyncWAL_disruptor_6.patch, > HBASE-16890-rc-v2.patch, HBASE-16890-rc-v3.patch, > HBASE-16890-remove-contention-v1.patch, HBASE-16890-remove-contention.patch, > Screen Shot 2016-10-25 at 7.34.47 PM.png, Screen Shot 2016-10-25 at 7.39.07 > PM.png, Screen Shot 2016-10-25 at 7.39.48 PM.png, Screen Shot 2016-11-04 at > 5.21.27 PM.png, Screen Shot 2016-11-04 at 5.30.18 PM.png, async.svg, > classic.svg, contention.png, contention_defaultWAL.png > > > Tests reveal that AsyncWAL under load in single node cluster performs slower > than the Default WAL. This task is to analyze and see if we could fix it. > See some discussions in the tail of JIRA HBASE-15536. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16890) Analyze the performance of AsyncWAL and fix the same
[ https://issues.apache.org/jira/browse/HBASE-16890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15656705#comment-15656705 ] ramkrishna.s.vasudevan commented on HBASE-16890: With checksum disabled and with 10G of data and 50 threads, we can assume that the number of sync futures created is almost the same; I could then see around 2000 additional syncs waiting for around 100 to 300 ms to get done. In the FSHLog case the number of syncs waiting between 100 and 300 ms is 2300, whereas in AsyncFSWAL it is 4100. As stated in my previous comment, I see that the flushes and log rolls are overall the same. > Analyze the performance of AsyncWAL and fix the same > > > Key: HBASE-16890 > URL: https://issues.apache.org/jira/browse/HBASE-16890 > Project: HBase > Issue Type: Sub-task > Components: wal >Affects Versions: 2.0.0 >Reporter: ramkrishna.s.vasudevan >Assignee: ramkrishna.s.vasudevan > Fix For: 2.0.0 > > Attachments: AsyncWAL_disruptor.patch, AsyncWAL_disruptor_1 > (2).patch, AsyncWAL_disruptor_3.patch, AsyncWAL_disruptor_3.patch, > AsyncWAL_disruptor_4.patch, AsyncWAL_disruptor_6.patch, > HBASE-16890-rc-v2.patch, HBASE-16890-rc-v3.patch, > HBASE-16890-remove-contention-v1.patch, HBASE-16890-remove-contention.patch, > Screen Shot 2016-10-25 at 7.34.47 PM.png, Screen Shot 2016-10-25 at 7.39.07 > PM.png, Screen Shot 2016-10-25 at 7.39.48 PM.png, Screen Shot 2016-11-04 at > 5.21.27 PM.png, Screen Shot 2016-11-04 at 5.30.18 PM.png, async.svg, > classic.svg, contention.png, contention_defaultWAL.png > > > Tests reveal that AsyncWAL under load in single node cluster performs slower > than the Default WAL. This task is to analyze and see if we could fix it. > See some discussions in the tail of JIRA HBASE-15536. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-17058) Lower epsilon used for jitter verification from HBASE-15324
[ https://issues.apache.org/jira/browse/HBASE-17058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15656689#comment-15656689 ] Esteban Gutierrez commented on HBASE-17058: --- Thanks for the review [~carp84], I will commit tomorrow if there is no other objection. > Lower epsilon used for jitter verification from HBASE-15324 > --- > > Key: HBASE-17058 > URL: https://issues.apache.org/jira/browse/HBASE-17058 > Project: HBase > Issue Type: Bug > Components: Compaction >Affects Versions: 2.0.0, 1.3.0, 1.4.0, 1.1.7, 1.2.4 >Reporter: Esteban Gutierrez >Assignee: Esteban Gutierrez > Attachments: HBASE-17058.master.001.patch > > > The current epsilon used is 1E-6 and it's too big; it might overflow the > desiredMaxFileSize. A trivial fix is to lower the epsilon to 2^-52 or even > 2^-53. An option to consider too is just to shift the jitter to always > decrement hbase.hregion.max.filesize (MAX_FILESIZE) instead of increasing the > size of the region and having to deal with the round-off. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-17058) Lower epsilon used for jitter verification from HBASE-15324
[ https://issues.apache.org/jira/browse/HBASE-17058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15656680#comment-15656680 ] Yu Li commented on HBASE-17058: --- Patch lgtm, +1. Epsilon is a more careful way to check whether a double equals zero, with no need for a greater/less check. Thanks for the fix [~esteban]. > Lower epsilon used for jitter verification from HBASE-15324 > --- > > Key: HBASE-17058 > URL: https://issues.apache.org/jira/browse/HBASE-17058 > Project: HBase > Issue Type: Bug > Components: Compaction >Affects Versions: 2.0.0, 1.3.0, 1.4.0, 1.1.7, 1.2.4 >Reporter: Esteban Gutierrez >Assignee: Esteban Gutierrez > Attachments: HBASE-17058.master.001.patch > > > The current epsilon used is 1E-6 and it's too big; it might overflow the > desiredMaxFileSize. A trivial fix is to lower the epsilon to 2^-52 or even > 2^-53. An option to consider too is just to shift the jitter to always > decrement hbase.hregion.max.filesize (MAX_FILESIZE) instead of increasing the > size of the region and having to deal with the round-off. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-2256) Delete row, followed quickly to put of the same row will sometimes fail.
[ https://issues.apache.org/jira/browse/HBASE-2256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15656659#comment-15656659 ] gfeng commented on HBASE-2256: -- It happened in HBase 1.2.3. My code is {code} Delete del = new Delete(row.getBytes()); table.delete(del); List<Put> puts = myPuts(); table.put(puts); {code} Most of the time it worked fine. But sometimes the data in HBase went crazy, so I don't trust the stored data. > Delete row, followed quickly to put of the same row will sometimes fail. > > > Key: HBASE-2256 > URL: https://issues.apache.org/jira/browse/HBASE-2256 > Project: HBase > Issue Type: Bug >Affects Versions: 0.20.3 >Reporter: Clint Morgan > Attachments: hbase-2256.patch > > > Doing a Delete of a whole row, followed immediately by a put to that row will > sometimes miss a cell. Attached is a test to provoke the issue. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
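A common explanation for this symptom is HBase's millisecond timestamp granularity: the Delete writes a tombstone at the server's current time, and a Put arriving within the same millisecond gets a timestamp less than or equal to the tombstone's, so the tombstone masks it until it is purged by a major compaction. Below is a pure-Java model of that visibility rule (illustrative only, not HBase code):

```java
public class TombstoneMaskSketch {
    // A row-level delete marker at deleteTs masks any cell whose timestamp
    // is <= deleteTs, regardless of the order the mutations arrived in.
    static boolean isVisible(long putTs, long deleteTs) {
        return putTs > deleteTs;
    }

    public static void main(String[] args) {
        long deleteTs = 1_700_000_000_000L;                    // tombstone written "now"
        System.out.println(isVisible(deleteTs, deleteTs));     // same millisecond: false (masked)
        System.out.println(isVisible(deleteTs + 1, deleteTs)); // later millisecond: true (visible)
    }
}
```

A practical workaround on the client side is to give the Put an explicit timestamp greater than the Delete's, or to ensure the Put lands in a later millisecond.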
[jira] [Updated] (HBASE-17058) Lower epsilon used for jitter verification from HBASE-15324
[ https://issues.apache.org/jira/browse/HBASE-17058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Esteban Gutierrez updated HBASE-17058: -- Attachment: HBASE-17058.master.001.patch [~carp84] & [~huaxiang]: attached a patch with the jitterRate > 0 approach; it is simpler to understand. > Lower epsilon used for jitter verification from HBASE-15324 > --- > > Key: HBASE-17058 > URL: https://issues.apache.org/jira/browse/HBASE-17058 > Project: HBase > Issue Type: Bug > Components: Compaction >Affects Versions: 2.0.0, 1.3.0, 1.4.0, 1.1.7, 1.2.4 >Reporter: Esteban Gutierrez >Assignee: Esteban Gutierrez > Attachments: HBASE-17058.master.001.patch > > > The current epsilon used is 1E-6 and it's too big; it might overflow the > desiredMaxFileSize. A trivial fix is to lower the epsilon to 2^-52 or even > 2^-53. An option to consider too is just to shift the jitter to always > decrement hbase.hregion.max.filesize (MAX_FILESIZE) instead of increasing the > size of the region and having to deal with the round-off. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
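The overflow risk and the jitterRate > 0 guard can be sketched in plain Java. The names are illustrative and only mirror the shape of the HBASE-15324 logic, not the actual patch:

```java
public class JitterOverflowSketch {
    // Shape of the original logic:
    //   desiredMaxFileSize += (long) (desiredMaxFileSize * jitterRate);
    // With a huge configured max file size and a positive jitter, the long
    // addition wraps around to a negative value.
    static long jittered(long maxFileSize, double jitterRate) {
        return maxFileSize + (long) (maxFileSize * jitterRate);
    }

    // The HBASE-17058 direction: guard on the sign of the jitter and clamp,
    // instead of comparing jitterRate against a coarse 1E-6 epsilon.
    static long jitteredSafe(long maxFileSize, double jitterRate) {
        long jitter = (long) (maxFileSize * jitterRate);
        if (jitterRate > 0 && jitter > Long.MAX_VALUE - maxFileSize) {
            return Long.MAX_VALUE; // addition would wrap; clamp instead
        }
        return maxFileSize + jitter;
    }

    public static void main(String[] args) {
        long max = Long.MAX_VALUE; // extreme hbase.hregion.max.filesize
        System.out.println(jittered(max, 1e-7) < 0);                   // true: wrapped negative
        System.out.println(jitteredSafe(max, 1e-7) == Long.MAX_VALUE); // true: clamped
        System.out.println(jitteredSafe(1000L, 0.25));                 // 1250: normal case
    }
}
```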
[jira] [Updated] (HBASE-17058) Lower epsilon used for jitter verification from HBASE-15324
[ https://issues.apache.org/jira/browse/HBASE-17058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Esteban Gutierrez updated HBASE-17058: -- Status: Patch Available (was: Open) > Lower epsilon used for jitter verification from HBASE-15324 > --- > > Key: HBASE-17058 > URL: https://issues.apache.org/jira/browse/HBASE-17058 > Project: HBase > Issue Type: Bug > Components: Compaction >Affects Versions: 1.2.4, 1.1.7, 2.0.0, 1.3.0, 1.4.0 >Reporter: Esteban Gutierrez >Assignee: Esteban Gutierrez > Attachments: HBASE-17058.master.001.patch > > > The current epsilon used is 1E-6 and it's too big; it might overflow the > desiredMaxFileSize. A trivial fix is to lower the epsilon to 2^-52 or even > 2^-53. An option to consider too is just to shift the jitter to always > decrement hbase.hregion.max.filesize (MAX_FILESIZE) instead of increasing the > size of the region and having to deal with the round-off. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16417) In-Memory MemStore Policy for Flattening and Compactions
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15656626#comment-15656626 ] ramkrishna.s.vasudevan commented on HBASE-16417: One thing to note is that I have not tried a read-write workload. But with 16G, and with MSLAB and chunk pool on - maybe we have heap reserved for it and the rest for block cache - is that sufficient? It's good to check it out. Will give it a try. > In-Memory MemStore Policy for Flattening and Compactions > > > Key: HBASE-16417 > URL: https://issues.apache.org/jira/browse/HBASE-16417 > Project: HBase > Issue Type: Sub-task >Reporter: Anastasia Braginsky >Assignee: Eshcar Hillel > Fix For: 2.0.0 > > Attachments: HBASE-16417-benchmarkresults-20161101.pdf, > HBASE-16417-benchmarkresults-20161110.pdf > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16417) In-Memory MemStore Policy for Flattening and Compactions
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15656617#comment-15656617 ] ramkrishna.s.vasudevan commented on HBASE-16417: Can you try with Xmx=30G? > In-Memory MemStore Policy for Flattening and Compactions > > > Key: HBASE-16417 > URL: https://issues.apache.org/jira/browse/HBASE-16417 > Project: HBase > Issue Type: Sub-task >Reporter: Anastasia Braginsky >Assignee: Eshcar Hillel > Fix For: 2.0.0 > > Attachments: HBASE-16417-benchmarkresults-20161101.pdf, > HBASE-16417-benchmarkresults-20161110.pdf > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-15484) Correct the semantic of batch and partial
[ https://issues.apache.org/jira/browse/HBASE-15484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15656613#comment-15656613 ] Yu Li commented on HBASE-15484: --- Thanks [~yangzhe1991] for the ping. bq. The scan API is too confusing now with caching, batching, allowPartialResults, setMaxResultsPerColumnFamily, and maxResultSize. We are not making the life of the user easy Cannot agree more, it's really complicated and confusing, even for developers I'd say... bq. I think only allowPartial and maxResultSize are needed... setBatch might still be useful for a paging kind of result presentation? Agreed we should only maintain these two semantics. For the paging case, could we use {{setCaching}} to achieve the goal? Like if we want to display 20 rows per page, just set caching to 20, call scan.next and display each time? Or please correct me if I misunderstood the meaning of "Paging" here (Smile) [~anoop.hbase] bq. Should we get rid of batching and caching, setMaxResultsPerColumnFamily (turn them into no-ops) and only do allowPartialResults and maxResultSize? How radical would that be for 2.0? This is an interface change (incompatibility) and would cause additional effort for migration from 1.x to 2.0. I'm not that familiar with our strategy here, but maybe slowing down a little bit is better? Like marking the interfaces deprecated and invoking the unified logic internally in 2.0, then removing them in some later release? 
I'm not that confident about educating customers - maybe that's just my limited personal experience though :-) > Correct the semantic of batch and partial > - > > Key: HBASE-15484 > URL: https://issues.apache.org/jira/browse/HBASE-15484 > Project: HBase > Issue Type: Bug >Affects Versions: 1.2.0, 1.1.3 >Reporter: Phil Yang >Assignee: Phil Yang > Fix For: 2.0.0 > > Attachments: HBASE-15484-v1.patch, HBASE-15484-v2.patch, > HBASE-15484-v3.patch, HBASE-15484-v4.patch > > > Follow-up to HBASE-15325, as discussed, the meaning of setBatch and > setAllowPartialResults should not be the same. We should not regard setBatch as > setAllowPartialResults. > And isPartial should be defined accurately. > (Considering getBatch==MaxInt if we don't setBatch.) If > result.rawcells.length row, isPartial==true, otherwise isPartial == false. So if users don't > setAllowPartialResults(true), isPartial should always be false. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
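The client-only batch idea discussed in this thread can be sketched as a simple re-chunking step — hypothetical, not the actual HBase implementation. The server returns whatever its size/time limits allow, and the client splits a row's cells into Results of at most `batch` cells before handing them to the application:

```java
import java.util.ArrayList;
import java.util.List;

public class ClientBatchSketch {
    // Re-chunk one row's cells (modeled as strings here) into Results of at
    // most `batch` cells, mimicking what a client-only setBatch could do
    // after the server has already applied its own size/time limits.
    static List<List<String>> rechunk(List<String> rowCells, int batch) {
        List<List<String>> results = new ArrayList<>();
        for (int i = 0; i < rowCells.size(); i += batch) {
            results.add(new ArrayList<>(
                rowCells.subList(i, Math.min(i + batch, rowCells.size()))));
        }
        return results;
    }

    public static void main(String[] args) {
        List<String> cells = List.of("c1", "c2", "c3", "c4", "c5");
        List<List<String>> out = rechunk(cells, 2);
        System.out.println(out.size()); // 3 chunks: [c1,c2], [c3,c4], [c5]
        System.out.println(out.get(2)); // [c5]
    }
}
```

This would cut RPC round trips (the server can fill a whole size-limited response) while still bounding the number of cells any single Result exposes to the caller.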
[jira] [Commented] (HBASE-16417) In-Memory MemStore Policy for Flattening and Compactions
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15656597#comment-15656597 ] Eshcar Hillel commented on HBASE-16417: --- Number of regions in all experiments is 50. Figure 3 compares default memstore with mslabs (blue column) and without mslabs (purple column) in a PE run with write only workload uniform distribution. Here are the absolute numbers: || write latencies ||MB/s||50th||75th||95th||99th||99.9th||99.99th||99.999th|| |no compaction|207.2| 57.3| 60.1| 80.7| 9608| 55362| 162462.6| 816825| |no compaction (no mslab)|207.88| 57.0| 59.9| 85.2| 9496| 55521| 167699.7| 933907| Figure 6 compares default memstore with mslabs (blue column) and without mslabs (purple column) in a YCSB run with mixed workload zipfian distribution. Here are the absolute numbers: || read latencies|| op/s|| #gc|| avg|| 50th||75th||95th|| 99th|| |no compaction| 5673| 7693| 3596.0| 2109| 3061.0| 4303| 5535.0| |no compaction (no mslab)| 7900| 6601| 2586| 2137| 3027.0| 4163| 5211.0| I can re-run the benchmarks if you believe this is in contradiction with your previous findings. But the HW is different and might affect the results. > In-Memory MemStore Policy for Flattening and Compactions > > > Key: HBASE-16417 > URL: https://issues.apache.org/jira/browse/HBASE-16417 > Project: HBase > Issue Type: Sub-task >Reporter: Anastasia Braginsky >Assignee: Eshcar Hillel > Fix For: 2.0.0 > > Attachments: HBASE-16417-benchmarkresults-20161101.pdf, > HBASE-16417-benchmarkresults-20161110.pdf > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-17071) Do not initialize MemstoreChunkPool when use mslab option is turned off
[ https://issues.apache.org/jira/browse/HBASE-17071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15656593#comment-15656593 ] Hadoop QA commented on HBASE-17071: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 3 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 4s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 44s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 41s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 45s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | 
{color:green} 0m 35s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 46s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 27m 47s {color} | {color:green} Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 51s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 80m 6s {color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 13s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 119m 58s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | Timed out junit tests | org.apache.hadoop.hbase.client.TestMetaWithReplicas | | | org.apache.hadoop.hbase.client.TestFromClientSideWithCoprocessor | | | org.apache.hadoop.hbase.client.TestHCM | | | org.apache.hadoop.hbase.client.TestTableSnapshotScanner | | | org.apache.hadoop.hbase.client.TestMobCloneSnapshotFromClient | \\ \\ || Subsystem || Report/Notes || | Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:7bda515 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12838517/HBASE-17071.patch | | JIRA Issue | HBASE-17071 | | Optional Tests | asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux f632b66dd1a0 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / f9c6b66 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | unit | https://builds.apache.org/job/PreCommit-HBASE-Build/4433/artifact/patchprocess/patch-unit-hbase-server.txt | | unit test logs | https://builds.apache.org/job/PreCommit-HBASE-Build/4433/artifact/patchprocess/patch-unit-hbase-server.txt | | Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/4433/testReport/ | | modules | C: hbase-server U: hbase-server | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/4433/console | |
[jira] [Commented] (HBASE-15484) Correct the semantic of batch and partial
[ https://issues.apache.org/jira/browse/HBASE-15484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15656551#comment-15656551 ] Phil Yang commented on HBASE-15484: --- And I am not sure the "paging" works if users don't want to limit the time/size but want to display a limited number of cells to an upper level, because the scanner is stateful and we must handle all results before the scanner is closed. It is a continuous flow for the whole Scan, so we may in fact have to deal with all Cells? I filed an issue, HBASE-15576 (sorry, I have some higher-priority work so this issue hasn't made any progress..), where we can pass a Cell with cf/cq/mvcc to the server so we can "page" more powerfully. > Correct the semantic of batch and partial > - > > Key: HBASE-15484 > URL: https://issues.apache.org/jira/browse/HBASE-15484 > Project: HBase > Issue Type: Bug >Affects Versions: 1.2.0, 1.1.3 >Reporter: Phil Yang >Assignee: Phil Yang > Fix For: 2.0.0 > > Attachments: HBASE-15484-v1.patch, HBASE-15484-v2.patch, > HBASE-15484-v3.patch, HBASE-15484-v4.patch > > > Follow-up to HBASE-15325, as discussed, the meaning of setBatch and > setAllowPartialResults should not be the same. We should not regard setBatch as > setAllowPartialResults. > And isPartial should be defined accurately. > (Considering getBatch==MaxInt if we don't setBatch.) If > result.rawcells.length row, isPartial==true, otherwise isPartial == false. So if users don't > setAllowPartialResults(true), isPartial should always be false. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16838) Implement basic scan
[ https://issues.apache.org/jira/browse/HBASE-16838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-16838: -- Attachment: HBASE-16838-v4.patch > Implement basic scan > > > Key: HBASE-16838 > URL: https://issues.apache.org/jira/browse/HBASE-16838 > Project: HBase > Issue Type: Sub-task >Affects Versions: 2.0.0 >Reporter: Duo Zhang >Assignee: Duo Zhang > Fix For: 2.0.0 > > Attachments: HBASE-16838-v1.patch, HBASE-16838-v2.patch, > HBASE-16838-v3.patch, HBASE-16838-v4.patch, HBASE-16838.patch > > > Implement a scan works like the grpc streaming call that all returned results > will be passed to a ScanConsumer. The methods of the consumer will be called > directly in the rpc framework threads so it is not allowed to do time > consuming work in the methods. So in general only experts or the > implementation of other methods in AsyncTable can call this method directly, > that's why I call it 'basic scan'. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (HBASE-15484) Correct the semantic of batch and partial
[ https://issues.apache.org/jira/browse/HBASE-15484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15656513#comment-15656513 ] Phil Yang edited comment on HBASE-15484 at 11/11/16 8:43 AM:
-
For caching we had some discussion in HBASE-16987 and HBASE-16973. Using a size/time limit is more direct than setCache for users, because usually they set a limit because they want to limit size/time, and now by default we set cache to max_value. Paging at cell level is a possible scenario. It is different from the "limit" which Duo mentions, because limit means we can stop and close the scanner, while batch means we should pause and wait for the next call. Since we have size/time limits at the server side, a large row will not result in OOM at the server even if users don't setBatch. If users indeed need setBatch to limit the max number of cells one Result returns, I think we can keep the setBatch interface but change it to client-only logic. In the server we only consider size/time limits, and if we return more than batch cells, we can cache the rest of them in the client? With this change, we can decrease the number of RPC requests without OOM/timeout risk. [~stack] [~carp84] [~mantonov] FYI, you also had some ideas about scanning in HBASE-16973 :) Thanks.
was (Author: yangzhe1991):
For caching we had some discussion in HBASE-16987 and HBASE-16973. Using a size/time limit is more direct than setCache for users, because usually they set a limit because they want to limit size/time, and now by default we set cache to max_value. Paging at cell level is a possible scenario. It is different from the "limit" which Duo mentions, because limit means we can stop and close the scanner, while batch means we should pause and wait for the next call. Since we have size/time limits at the server side, a large row will not result in OOM at the server even if users don't setBatch. If users indeed need setBatch to limit the max number of cells one Result returns, I think we can keep the setBatch interface but change it to client-only logic. In the server we only consider size/time limits, and if we return more than batch cells, we can cache them in the client? With this change, we can decrease the number of RPC requests without OOM/timeout risk. [~stack] [~carp84] [~mantonov] FYI, you also had some ideas about scanning in HBASE-16973 :) Thanks.
> Correct the semantic of batch and partial
> -
>
> Key: HBASE-15484
> URL: https://issues.apache.org/jira/browse/HBASE-15484
> Project: HBase
> Issue Type: Bug
> Affects Versions: 1.2.0, 1.1.3
> Reporter: Phil Yang
> Assignee: Phil Yang
> Fix For: 2.0.0
>
> Attachments: HBASE-15484-v1.patch, HBASE-15484-v2.patch, HBASE-15484-v3.patch, HBASE-15484-v4.patch
>
>
> Follow-up to HBASE-15325: as discussed, the meaning of setBatch and setAllowPartialResults should not be the same. We should not regard setBatch as setAllowPartialResults.
> And isPartial should be defined accurately. (Considering getBatch==MaxInt if we don't setBatch.) If result.rawcells.length is less than the number of cells of the row (the Result does not cover the whole row), isPartial==true, otherwise isPartial == false. So if the user doesn't setAllowPartialResults(true), isPartial should always be false.
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (HBASE-15484) Correct the semantic of batch and partial
[ https://issues.apache.org/jira/browse/HBASE-15484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15656513#comment-15656513 ] Phil Yang edited comment on HBASE-15484 at 11/11/16 8:37 AM:
-
For caching we had some discussion in HBASE-16987 and HBASE-16973. Using a size/time limit is more direct than setCache for users, because usually they set a limit because they want to limit size/time, and now by default we set cache to max_value. Paging at cell level is a possible scenario. It is different from the "limit" which Duo mentions, because limit means we can stop and close the scanner, while batch means we should pause and wait for the next call. Since we have size/time limits at the server side, a large row will not result in OOM at the server even if users don't setBatch. If users indeed need setBatch to limit the max number of cells one Result returns, I think we can keep the setBatch interface but change it to client-only logic. In the server we only consider size/time limits, and if we return more than batch cells, we can cache them in the client? With this change, we can decrease the number of RPC requests without OOM/timeout risk. [~stack] [~carp84] [~mantonov] FYI, you also had some ideas about scanning in HBASE-16973 :) Thanks.
was (Author: yangzhe1991):
For caching we had some discussion in HBASE-16987 and HBASE-16973. Using a size/time limit is more direct than setCache for users, because usually they set a limit because they want to limit size/time, and now by default we set cache to max_value. Paging at cell level is a possible scenario. It is different from the "limit" which Duo mentions, because limit means we can stop and close the scanner, while batch means we should pause and wait for the next call. Since we have size/time limits at the server side, a large row will not result in OOM at the server even if users don't setBatch. I think we can keep the setBatch interface but change it to client-only logic. In the server we only consider size/time limits, and if we return more than batch cells, we can cache them in the client? With this change, we can decrease the number of RPC requests without OOM/timeout risk. [~stack] [~carp84] [~mantonov] FYI, you also had some ideas about scanning in HBASE-16973 :) Thanks.
> Correct the semantic of batch and partial
> -
>
> Key: HBASE-15484
> URL: https://issues.apache.org/jira/browse/HBASE-15484
> Project: HBase
> Issue Type: Bug
> Affects Versions: 1.2.0, 1.1.3
> Reporter: Phil Yang
> Assignee: Phil Yang
> Fix For: 2.0.0
>
> Attachments: HBASE-15484-v1.patch, HBASE-15484-v2.patch, HBASE-15484-v3.patch, HBASE-15484-v4.patch
>
>
> Follow-up to HBASE-15325: as discussed, the meaning of setBatch and setAllowPartialResults should not be the same. We should not regard setBatch as setAllowPartialResults.
> And isPartial should be defined accurately. (Considering getBatch==MaxInt if we don't setBatch.) If result.rawcells.length is less than the number of cells of the row (the Result does not cover the whole row), isPartial==true, otherwise isPartial == false. So if the user doesn't setAllowPartialResults(true), isPartial should always be false.
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-15484) Correct the semantic of batch and partial
[ https://issues.apache.org/jira/browse/HBASE-15484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15656513#comment-15656513 ] Phil Yang commented on HBASE-15484:
---
For caching we had some discussion in HBASE-16987 and HBASE-16973. Using a size/time limit is more direct than setCache for users, because usually they set a limit because they want to limit size/time, and now by default we set cache to max_value. Paging at cell level is a possible scenario. It is different from the "limit" which Duo mentions, because limit means we can stop and close the scanner, while batch means we should pause and wait for the next call. Since we have size/time limits at the server side, a large row will not result in OOM at the server even if users don't setBatch. I think we can keep the setBatch interface but change it to client-only logic. In the server we only consider size/time limits, and if we return more than batch cells, we can cache them in the client? With this change, we can decrease the number of RPC requests without OOM/timeout risk. [~stack] [~carp84] [~mantonov] FYI, you also had some ideas about scanning in HBASE-16973 :) Thanks.
> Correct the semantic of batch and partial
> -
>
> Key: HBASE-15484
> URL: https://issues.apache.org/jira/browse/HBASE-15484
> Project: HBase
> Issue Type: Bug
> Affects Versions: 1.2.0, 1.1.3
> Reporter: Phil Yang
> Assignee: Phil Yang
> Fix For: 2.0.0
>
> Attachments: HBASE-15484-v1.patch, HBASE-15484-v2.patch, HBASE-15484-v3.patch, HBASE-15484-v4.patch
>
>
> Follow-up to HBASE-15325: as discussed, the meaning of setBatch and setAllowPartialResults should not be the same. We should not regard setBatch as setAllowPartialResults.
> And isPartial should be defined accurately. (Considering getBatch==MaxInt if we don't setBatch.) If result.rawcells.length is less than the number of cells of the row (the Result does not cover the whole row), isPartial==true, otherwise isPartial == false. So if the user doesn't setAllowPartialResults(true), isPartial should always be false.
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
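The client-only batch idea in the comment above could look roughly like this: the server returns cells bounded only by its size/time limits, and the client re-chunks them into batch-sized Results, caching the remainder for later next() calls. This is a simplified, hypothetical sketch in plain Java with strings standing in for Cells, not HBase code:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of client-side batching: one server response may carry more than
// `batch` cells (the server only enforces size/time limits); the client slices
// the cells into batch-sized chunks and caches the rest for subsequent next()
// calls, saving RPC round trips. Hypothetical illustration, not HBase code.
final class ClientBatchSketch {
    private final List<String> cached = new ArrayList<>();
    private final int batch;

    ClientBatchSketch(int batch) { this.batch = batch; }

    // Called with the cells of one server response.
    void onServerResponse(List<String> cells) { cached.addAll(cells); }

    // Returns up to `batch` cells per call, draining the client-side cache.
    List<String> next() {
        int n = Math.min(batch, cached.size());
        List<String> out = new ArrayList<>(cached.subList(0, n));
        cached.subList(0, n).clear(); // subList is a view; clearing it drains the cache
        return out;
    }

    public static void main(String[] args) {
        ClientBatchSketch scanner = new ClientBatchSketch(2);
        scanner.onServerResponse(List.of("c1", "c2", "c3", "c4", "c5"));
        System.out.println(scanner.next()); // [c1, c2]
        System.out.println(scanner.next()); // [c3, c4]
        System.out.println(scanner.next()); // [c5]
    }
}
```

The design point being debated: with this split, batch no longer affects what the server sends, so it cannot cause extra RPCs or server-side OOM; it only shapes how the client surfaces cells.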
[jira] [Commented] (HBASE-16972) Log more details for Scan#next request when responseTooSlow
[ https://issues.apache.org/jira/browse/HBASE-16972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15656484#comment-15656484 ] Hudson commented on HBASE-16972:
SUCCESS: Integrated in Jenkins build HBase-1.3-JDK7 #65 (See [https://builds.apache.org/job/HBase-1.3-JDK7/65/])
HBASE-16972 Log more details for Scan#next request when responseTooSlow (liyu: rev 996b4847fa3867e9b69e6f35727732836354f7a3)
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServer.java
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServerInterface.java
> Log more details for Scan#next request when responseTooSlow
> ---
>
> Key: HBASE-16972
> URL: https://issues.apache.org/jira/browse/HBASE-16972
> Project: HBase
> Issue Type: Improvement
> Components: Operability
> Affects Versions: 1.2.3, 1.1.7
> Reporter: Yu Li
> Assignee: Yu Li
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.4, 1.1.8
>
> Attachments: HBASE-16972.patch, HBASE-16972.v2.patch, HBASE-16972.v3.patch
>
>
> Currently, if responseTooSlow happens on the scan.next call, we get a warn log like below:
> {noformat}
> 2016-10-31 11:43:23,430 WARN [RpcServer.FifoWFPBQ.priority.handler=5,queue=1,port=60193] ipc.RpcServer(2574):
> (responseTooSlow):
> {"call":"Scan(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ScanRequest)",
> "starttimems":1477885403428,"responsesize":52,"method":"Scan","param":"scanner_id: 11 number_of_rows: 2147483647
> close_scanner: false next_call_seq: 0 client_handles_partials: true client_handles_heartbeats: true
> track_scan_metrics: false renew: false","processingtimems":2,"client":"127.0.0.1:60254","queuetimems":0,"class":"HMaster"}
> {noformat}
> From this we only have a {{scanner_id}}, and it is impossible to know what exactly this scan is about, like against which region of which table.
> After this JIRA, we will improve the message to something like below (notice > the last line): > {noformat} > 2016-10-31 11:43:23,430 WARN > [RpcServer.FifoWFPBQ.priority.handler=5,queue=1,port=60193] > ipc.RpcServer(2574): > (responseTooSlow): > {"call":"Scan(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ScanRequest)", > "starttimems":1477885403428,"responsesize":52,"method":"Scan","param":"scanner_id: > 11 number_of_rows: 2147483647 > close_scanner: false next_call_seq: 0 client_handles_partials: true > client_handles_heartbeats: true > track_scan_metrics: false renew: > false","processingtimems":2,"client":"127.0.0.1:60254","queuetimems":0,"class":"HMaster", > "scandetails":"table: hbase:meta region: hbase:meta,,1.1588230740"} > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-17020) keylen in midkey() dont computed correctly
[ https://issues.apache.org/jira/browse/HBASE-17020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15656466#comment-15656466 ] Hudson commented on HBASE-17020:
SUCCESS: Integrated in Jenkins build HBase-1.1-JDK8 #1901 (See [https://builds.apache.org/job/HBase-1.1-JDK8/1901/])
HBASE-17020 keylen in midkey() dont computed correctly (liyu: rev 23e168d0b4aba2307ec9735da1cdeb8569ed9f63)
* (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlockIndex.java
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlockIndex.java
* (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileWriterV2.java
> keylen in midkey() dont computed correctly
> --
>
> Key: HBASE-17020
> URL: https://issues.apache.org/jira/browse/HBASE-17020
> Project: HBase
> Issue Type: Bug
> Components: HFile
> Affects Versions: 2.0.0, 1.3.0, 1.4.0, 1.1.7, 0.98.23, 1.2.4
> Reporter: Yu Sun
> Assignee: Yu Sun
> Fix For: 2.0.0, 1.4.0, 1.2.5, 0.98.24, 1.1.8
>
> Attachments: HBASE-17020-branch-0.98.patch, HBASE-17020-v1.patch, HBASE-17020-v2.patch, HBASE-17020-v2.patch, HBASE-17020-v3-branch1.1.patch, HBASE-17020.branch-0.98.patch, HBASE-17020.branch-0.98.patch, HBASE-17020.branch-1.1.patch
>
>
> In CellBasedKeyBlockIndexReader.midkey():
> {code}
> ByteBuff b = midLeafBlock.getBufferWithoutHeader();
> int numDataBlocks = b.getIntAfterPosition(0);
> int keyRelOffset = b.getIntAfterPosition(Bytes.SIZEOF_INT * (midKeyEntry + 1));
> int keyLen = b.getIntAfterPosition(Bytes.SIZEOF_INT * (midKeyEntry + 2)) - keyRelOffset;
> {code}
> The local variable keyLen computed here actually gets the total length of SECONDARY_INDEX_ENTRY_OVERHEAD + firstKey.length, not the key length alone; the code that accumulates the offsets is:
> {code}
> void add(byte[] firstKey, long blockOffset, int onDiskDataSize, long curTotalNumSubEntries) {
>   // Record the offset for the secondary index
>   secondaryIndexOffsetMarks.add(curTotalNonRootEntrySize);
>   curTotalNonRootEntrySize += SECONDARY_INDEX_ENTRY_OVERHEAD + firstKey.length;
> {code}
> When the midkey is the last entry of a leaf-level index block, this may throw:
> {quote}
> 2016-10-01 12:27:55,186 ERROR [MemStoreFlusher.0] regionserver.MemStoreFlusher: Cache flusher failed for entry [flush region pora_6_item_feature,0061:,1473838922457.12617bc4ebbfd171018bf96ac9bdd2a7.]
> java.lang.ArrayIndexOutOfBoundsException
> at org.apache.hadoop.hbase.util.ByteBufferUtils.copyFromBufferToArray(ByteBufferUtils.java:936)
> at org.apache.hadoop.hbase.nio.SingleByteBuff.toBytes(SingleByteBuff.java:303)
> at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$CellBasedKeyBlockIndexReader.midkey(HFileBlockIndex.java:419)
> at org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.midkey(HFileReaderImpl.java:1519)
> at org.apache.hadoop.hbase.regionserver.StoreFile$Reader.midkey(StoreFile.java:1520)
> at org.apache.hadoop.hbase.regionserver.StoreFile.getFileSplitPoint(StoreFile.java:706)
> at org.apache.hadoop.hbase.regionserver.DefaultStoreFileManager.getSplitPoint(DefaultStoreFileManager.java:126)
> at org.apache.hadoop.hbase.regionserver.HStore.getSplitPoint(HStore.java:1983)
> at org.apache.hadoop.hbase.regionserver.ConstantFamilySizeRegionSplitPolicy.getSplitPoint(ConstantFamilySizeRegionSplitPolicy.java:77)
> at org.apache.hadoop.hbase.regionserver.HRegion.checkSplit(HRegion.java:7756)
> at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:513)
> at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:471)
> at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$800(MemStoreFlusher.java:75)
> at org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:259)
> at java.lang.Thread.run(Thread.java:756)
> {quote}
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
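The off-by-overhead effect described above can be reproduced with a tiny self-contained simulation of the secondary-index offset bookkeeping. This sketch assumes SECONDARY_INDEX_ENTRY_OVERHEAD = SIZEOF_LONG + SIZEOF_INT = 12 (as in HFileBlockIndex) and uses a plain int array in place of the block buffer; it is an illustration, not the actual patch:

```java
// Simulates the secondary-index offset marks from BlockIndexChunk.add(): each
// entry advances the running offset by overhead + key length, so the difference
// of two adjacent marks is overhead + keyLen -- the per-entry overhead must be
// subtracted to recover the key length. Sketch only, not the HBASE-17020 patch.
final class MidKeyLenSketch {
    static final int SECONDARY_INDEX_ENTRY_OVERHEAD = 8 + 4; // long offset + int on-disk size

    static int[] offsetMarks(int[] keyLens) {
        int[] marks = new int[keyLens.length + 1];
        int curTotalNonRootEntrySize = 0;
        for (int i = 0; i < keyLens.length; i++) {
            marks[i] = curTotalNonRootEntrySize;
            curTotalNonRootEntrySize += SECONDARY_INDEX_ENTRY_OVERHEAD + keyLens[i];
        }
        marks[keyLens.length] = curTotalNonRootEntrySize;
        return marks;
    }

    // Buggy keyLen from the midkey() snippet: raw difference of adjacent marks.
    static int buggyKeyLen(int[] marks, int midKeyEntry) {
        return marks[midKeyEntry + 1] - marks[midKeyEntry];
    }

    // Fixed keyLen: subtract the per-entry overhead.
    static int fixedKeyLen(int[] marks, int midKeyEntry) {
        return buggyKeyLen(marks, midKeyEntry) - SECONDARY_INDEX_ENTRY_OVERHEAD;
    }

    public static void main(String[] args) {
        int[] marks = offsetMarks(new int[] {5, 7, 3}); // firstKey lengths of 3 entries
        System.out.println(buggyKeyLen(marks, 1)); // 19 = 12 + 7: reads 12 bytes past the key
        System.out.println(fixedKeyLen(marks, 1)); // 7: the real length of the mid key
    }
}
```

The overstated length is what makes the last entry of a leaf block read past the buffer and trigger the ArrayIndexOutOfBoundsException in the stack trace above.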
[jira] [Updated] (HBASE-17060) backport HBASE-16570 to 1.3.1
[ https://issues.apache.org/jira/browse/HBASE-17060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] binlijin updated HBASE-17060: - Attachment: HBASE-17060.branch-1.3.v1.patch > backport HBASE-16570 to 1.3.1 > - > > Key: HBASE-17060 > URL: https://issues.apache.org/jira/browse/HBASE-17060 > Project: HBase > Issue Type: Sub-task >Affects Versions: 1.3.0 >Reporter: Yu Li >Assignee: binlijin > Fix For: 1.3.1 > > Attachments: HBASE-17060.branch-1.3.v1.patch > > > Need some backport after 1.3.0 got released -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (HBASE-14882) Provide a Put API that adds the provided family, qualifier, value without copying
[ https://issues.apache.org/jira/browse/HBASE-14882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15656453#comment-15656453 ] Anoop Sam John edited comment on HBASE-14882 at 11/11/16 8:02 AM:
--
So now we clearly know why this API is not client-end alone. Code paths within the server also use this API, e.g. when adding to the meta table we make a Put and add cells using this API. This happens within the server end and the cell as such reaches the region, where we have assumptions about the cell: that it has SettableSeqId implemented, etc. So we cannot just avoid that. In the normal APIs in Put you can see we create KeyValue, and there you can see it implements ExtendedCell instead of Cell; this is because KV is used at the server end, and we need cells flowing in the server to be of this new type. As this new Cell impl is also used on the server side, we cannot avoid it also implementing ExtendedCell!! Maybe we need to add fat comment lines in the new class, like the above, explaining why at the client end we have a Cell with ExtendedCell being used. Anyway we have it in hbase-common; that is good. wdyt?
was (Author: anoop.hbase):
So now we clearly know why this API is not client-end alone. Code paths within the server also use this API, e.g. when adding to the meta table we make a Put and add cells using this API. This happens within the server end and the cell as such reaches the region, where we have assumptions about the cell: that it has SettableSeqId implemented, etc. So we cannot just avoid that. In the normal APIs in Put you can see we create KeyValue, and there you can see it implements ExtendedCell instead of Cell; this is because KV is used at the server end, and we need cells flowing in the server to be of this new type. As this new Cell impl is also used on the server side, we cannot avoid it also implementing ExtendedCell!! Maybe we need to add fat comment lines in the new class, like the above, explaining why at the client end we have a Cell with ExtendedCell being used. Anyway we have it in hbase-common; that is good. w dyt?
> Provide a Put API that adds the provided family, qualifier, value without copying
> -
>
> Key: HBASE-14882
> URL: https://issues.apache.org/jira/browse/HBASE-14882
> Project: HBase
> Issue Type: Improvement
> Affects Versions: 1.2.0
> Reporter: Jerry He
> Assignee: Xiang Li
> Fix For: 2.0.0
>
> Attachments: HBASE-14882.master.000.patch, HBASE-14882.master.001.patch, HBASE-14882.master.002.patch, HBASE-14882.master.003.patch
>
>
> In the Put API, we have addImmutable():
> {code}
> /**
>  * See {@link #addColumn(byte[], byte[], byte[])}. This version expects
>  * that the underlying arrays won't change. It's intended
>  * for internal usage in HBase and for advanced client applications.
>  */
> public Put addImmutable(byte [] family, byte [] qualifier, byte [] value)
> {code}
> But in the implementation, the family, qualifier and value are still copied locally to create a kv.
> Hopefully we should provide an API that truly uses the immutable family, qualifier and value.
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-14882) Provide a Put API that adds the provided family, qualifier, value without copying
[ https://issues.apache.org/jira/browse/HBASE-14882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15656453#comment-15656453 ] Anoop Sam John commented on HBASE-14882:
So now we clearly know why this API is not client-end alone. Code paths within the server also use this API, e.g. when adding to the meta table we make a Put and add cells using this API. This happens within the server end and the cell as such reaches the region, where we have assumptions about the cell: that it has SettableSeqId implemented, etc. So we cannot just avoid that. In the normal APIs in Put you can see we create KeyValue, and there you can see it implements ExtendedCell instead of Cell; this is because KV is used at the server end, and we need cells flowing in the server to be of this new type. As this new Cell impl is also used on the server side, we cannot avoid it also implementing ExtendedCell!! Maybe we need to add fat comment lines in the new class, like the above, explaining why at the client end we have a Cell with ExtendedCell being used. Anyway we have it in hbase-common; that is good. wdyt?
> Provide a Put API that adds the provided family, qualifier, value without copying
> -
>
> Key: HBASE-14882
> URL: https://issues.apache.org/jira/browse/HBASE-14882
> Project: HBase
> Issue Type: Improvement
> Affects Versions: 1.2.0
> Reporter: Jerry He
> Assignee: Xiang Li
> Fix For: 2.0.0
>
> Attachments: HBASE-14882.master.000.patch, HBASE-14882.master.001.patch, HBASE-14882.master.002.patch, HBASE-14882.master.003.patch
>
>
> In the Put API, we have addImmutable():
> {code}
> /**
>  * See {@link #addColumn(byte[], byte[], byte[])}. This version expects
>  * that the underlying arrays won't change. It's intended
>  * for internal usage in HBase and for advanced client applications.
>  */
> public Put addImmutable(byte [] family, byte [] qualifier, byte [] value)
> {code}
> But in the implementation, the family, qualifier and value are still copied locally to create a kv.
> Hopefully we should provide an API that truly uses the immutable family, qualifier and value.
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
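The copy-vs-wrap distinction at the heart of the issue can be shown with simplified stand-in types. These are not the actual HBase Put/Cell/KeyValue classes, just a minimal illustration of what "truly immutable, no copying" means for the caller's arrays:

```java
// Illustrates a copying add (like building a KeyValue from the family /
// qualifier / value arrays) versus a truly zero-copy add that wraps the
// caller's arrays directly. Simplified stand-ins, not HBase classes.
final class AddImmutableSketch {
    static final class CopyingCell {
        final byte[] value;
        CopyingCell(byte[] value) { this.value = value.clone(); } // defensive copy
    }

    static final class WrappingCell {
        final byte[] value;
        // No copy: the caller promises the underlying array won't change.
        WrappingCell(byte[] value) { this.value = value; }
    }

    public static void main(String[] args) {
        byte[] v = {1, 2, 3};
        CopyingCell copied = new CopyingCell(v);
        WrappingCell wrapped = new WrappingCell(v);
        System.out.println(copied.value == v);  // false: a private copy was made
        System.out.println(wrapped.value == v); // true: same array, zero-copy
    }
}
```

The trade-off the thread discusses is that a zero-copy cell flowing into the server must still satisfy server-side expectations (e.g. the ExtendedCell/SettableSeqId contracts), which is why the new client-side cell type cannot live entirely outside those interfaces.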
[jira] [Commented] (HBASE-17073) Increase the max number of buffers in ByteBufferPool
[ https://issues.apache.org/jira/browse/HBASE-17073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15656452#comment-15656452 ] ramkrishna.s.vasudevan commented on HBASE-17073:
+1 to increase. We saw that with 100 threads in the PE tool the default config was just not enough (even when we had 100 handlers).
> Increase the max number of buffers in ByteBufferPool
>
> Key: HBASE-17073
> URL: https://issues.apache.org/jira/browse/HBASE-17073
> Project: HBase
> Issue Type: Sub-task
> Affects Versions: 2.0.0
> Reporter: Anoop Sam John
> Assignee: Anoop Sam John
> Fix For: 2.0.0
>
>
> Before the HBASE-15525 fix, we had variable-sized buffers in our buffer pool, and the max size up to which one buffer could grow was 2 MB. Now we have changed it to a fixed-size BBPool; by default 64 KB is the size of each buffer. But the max number of BBs allowed in the pool was not changed, i.e. it is still twice the number of handlers. Maybe we should increase it now. To make it equivalent to the old 2 MB sizing, we would need 32 * 2 * handlers buffers. There is no initial #BBs anyway. 2 MB is the default max response size we have, and for write requests, when using BufferedMutator, 2 MB is the default flush limit. We can make the default max #BBs 32 * #handlers, I believe.
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
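The sizing argument in the description works out as simple arithmetic: 2 MB / 64 KB = 32, so the old two-buffers-per-handler capacity corresponds to 32 * 2 * handlers fixed-size buffers, while the proposal settles on 32 * handlers. A quick check (the handler count is illustrative):

```java
// Checks the ByteBufferPool sizing arithmetic from the description above.
// Old pool: up to 2 * handlers buffers, each growing to at most 2 MB.
// New pool: fixed 64 KB buffers, so matching the old capacity would need
// (2 MB / 64 KB) * 2 * handlers = 64 * handlers; the proposed default max
// is 32 * handlers. Plain arithmetic, not HBase code.
final class BufferPoolSizingSketch {
    static final int BUFFER_SIZE = 64 * 1024;            // 64 KB per pooled buffer
    static final int OLD_MAX_BUFFER_SIZE = 2 * 1024 * 1024; // old per-buffer max: 2 MB

    static int buffersPer2MB() { return OLD_MAX_BUFFER_SIZE / BUFFER_SIZE; }

    static int proposedMaxBuffers(int handlers) { return 32 * handlers; }

    public static void main(String[] args) {
        int handlers = 30; // illustrative handler count
        System.out.println(buffersPer2MB());                    // 32
        System.out.println(buffersPer2MB() * 2 * handlers);     // 1920: full 2 MB-equivalent
        System.out.println(proposedMaxBuffers(handlers));       // 960: proposed default max #BBs
    }
}
```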