[jira] [Commented] (HBASE-16095) Add priority to TableDescriptor and priority region open thread pool
[ https://issues.apache.org/jira/browse/HBASE-16095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15376440#comment-15376440 ] Hudson commented on HBASE-16095: SUCCESS: Integrated in HBase-1.3 #784 (See [https://builds.apache.org/job/HBase-1.3/784/]) HBASE-16095 Add priority to TableDescriptor and priority region open (enis: rev ab1e0dd440ee53e03d0ebfa8f0f0b27d585880a5) * hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/handler/OpenPriorityRegionHandler.java * hbase-client/src/main/java/org/apache/hadoop/hbase/HTableDescriptor.java * hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java * hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java * hbase-client/src/test/java/org/apache/hadoop/hbase/TestHTableDescriptor.java * hbase-client/src/main/java/org/apache/hadoop/hbase/executor/EventType.java * hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionOpen.java * hbase-server/src/main/java/org/apache/hadoop/hbase/executor/ExecutorService.java * hbase-client/src/main/java/org/apache/hadoop/hbase/executor/ExecutorType.java > Add priority to TableDescriptor and priority region open thread pool > > > Key: HBASE-16095 > URL: https://issues.apache.org/jira/browse/HBASE-16095 > Project: HBase > Issue Type: Bug >Reporter: Enis Soztutar >Assignee: Enis Soztutar > Fix For: 2.0.0, 1.3.0, 1.4.0, 0.98.21 > > Attachments: HBASE-16095-0.98.patch, HBASE-16095-0.98.patch, > hbase-16095_v0.patch, hbase-16095_v1.patch, hbase-16095_v2.patch, > hbase-16095_v3.patch > > > This is in the similar area with HBASE-15816, and also required with the > current secondary indexing for Phoenix. > The problem with P secondary indexes is that data table regions depend on > index regions to be able to make progress. Possible distributed deadlocks can > be prevented via custom RpcScheduler + RpcController configuration via > HBASE-11048 and PHOENIX-938. However, region opening also has the same > deadlock situation, because data region open has to replay the WAL edits to > the index regions. There is only 1 thread pool to open regions with 3 workers > by default. So if the cluster is recovering / restarting from scratch, the > deadlock happens because some index regions cannot be opened due to them > being in the same queue waiting for data regions to open (which waits for > RPC'ing to index regions which is not open). This is reproduced in almost all > Phoenix secondary index clusters (mutable table w/o transactions) that we > see. > The proposal is to have a "high priority" region opening thread pool, and > have the HTD carry the relative priority of a table. This maybe useful for > other "framework" level tables from Phoenix, Tephra, Trafodian, etc if they > want some specific tables to become online faster. > As a follow up patch, we can also take a look at how this priority > information can be used by the rpc scheduler on the server side or rpc > controller on the client side, so that we do not have to set priorities > manually per-operation. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
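To make the proposal above concrete, here is a minimal, hedged sketch of how a client could flag a table for the priority region-open pool once this patch is in place. It assumes the patch exposes HTableDescriptor#setPriority(int) and that a value such as HConstants.HIGH_QOS is accepted; the table and family names are illustrative only.
{code}
// Illustrative sketch only: assumes the patch exposes HTableDescriptor#setPriority(int)
// and that a value such as HConstants.HIGH_QOS routes region opens to the priority pool.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class PriorityTableExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      HTableDescriptor htd = new HTableDescriptor(TableName.valueOf("MY_INDEX_TABLE"));
      htd.addFamily(new HColumnDescriptor("0"));
      // Mark the index table as high priority so its regions are handled by the
      // priority region-open executor ahead of ordinary data-table regions.
      htd.setPriority(HConstants.HIGH_QOS);
      admin.createTable(htd);
    }
  }
}
{code}
Because the priority travels with the table descriptor, every region server can route the table's OPEN events to the OpenPriorityRegionHandler pool without any per-operation setting.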
[jira] [Comment Edited] (HBASE-16205) When Cells are not copied to MSLAB, deep clone it while adding to Memstore
[ https://issues.apache.org/jira/browse/HBASE-16205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15376425#comment-15376425 ] binlijin edited comment on HBASE-16205 at 7/14/16 6:47 AM: --- I mean if a request have two cell: cell1=100k cell2=266k, they all share a common byte[]=367K. The share byte[] bigger than 256K, so what if we do not copy cell1 to MSLAB and do not deep clone cell2? Because the share byte[] is bigger enough. was (Author: aoxiang): I mean if a request have two cell: cell1=100k cell2=256k, they all share a common byte[]=357K. The share byte[] bigger than 256K, so what if we do not copy cell1 to MSLAB and do not deep clone cell2? Because the share byte[] is bigger enough. > When Cells are not copied to MSLAB, deep clone it while adding to Memstore > -- > > Key: HBASE-16205 > URL: https://issues.apache.org/jira/browse/HBASE-16205 > Project: HBase > Issue Type: Sub-task > Components: regionserver >Affects Versions: 2.0.0 >Reporter: Anoop Sam John >Assignee: Anoop Sam John >Priority: Critical > Fix For: 2.0.0 > > Attachments: HBASE-16205.patch > > > This is imp after HBASE-15180 optimization. After that we the cells flowing > in write path will be backed by the same byte[] where the RPC read the > request into. By default we have MSLAB On and so we have a copy operation > while adding Cells to memstore. This copy might not be there if > 1. MSLAB is turned OFF > 2. Cell size is more than a configurable max size. This defaults to 256 KB > 3. If the operation is Append/Increment. > In such cases, we should just clone the Cell into a new byte[] and then add > to memstore. Or else we keep referring to the bigger byte[] chunk for longer > time. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16205) When Cells are not copied to MSLAB, deep clone it while adding to Memstore
[ https://issues.apache.org/jira/browse/HBASE-16205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15376425#comment-15376425 ] binlijin commented on HBASE-16205: -- I mean if a request has two cells, cell1=100k and cell2=256k, and they share a common byte[]=357K. The shared byte[] is bigger than 256K, so what if we neither copy cell1 to MSLAB nor deep clone cell2, since the shared byte[] is big enough? > When Cells are not copied to MSLAB, deep clone it while adding to Memstore > -- > > Key: HBASE-16205 > URL: https://issues.apache.org/jira/browse/HBASE-16205 > Project: HBase > Issue Type: Sub-task > Components: regionserver >Affects Versions: 2.0.0 >Reporter: Anoop Sam John >Assignee: Anoop Sam John >Priority: Critical > Fix For: 2.0.0 > > Attachments: HBASE-16205.patch > > > This is imp after HBASE-15180 optimization. After that we the cells flowing > in write path will be backed by the same byte[] where the RPC read the > request into. By default we have MSLAB On and so we have a copy operation > while adding Cells to memstore. This copy might not be there if > 1. MSLAB is turned OFF > 2. Cell size is more than a configurable max size. This defaults to 256 KB > 3. If the operation is Append/Increment. > In such cases, we should just clone the Cell into a new byte[] and then add > to memstore. Or else we keep referring to the bigger byte[] chunk for longer > time. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16205) When Cells are not copied to MSLAB, deep clone it while adding to Memstore
[ https://issues.apache.org/jira/browse/HBASE-16205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15376411#comment-15376411 ] Anoop Sam John commented on HBASE-16205: Yes when the Cell's length is big then MSLAB copy won't happen. {code} static final String MAX_ALLOC_KEY = "hbase.hregion.memstore.mslab.max.allocation"; static final int MAX_ALLOC_DEFAULT = 256 * 1024; // allocs bigger than this don't go through // allocator {code} > When Cells are not copied to MSLAB, deep clone it while adding to Memstore > -- > > Key: HBASE-16205 > URL: https://issues.apache.org/jira/browse/HBASE-16205 > Project: HBase > Issue Type: Sub-task > Components: regionserver >Affects Versions: 2.0.0 >Reporter: Anoop Sam John >Assignee: Anoop Sam John >Priority: Critical > Fix For: 2.0.0 > > Attachments: HBASE-16205.patch > > > This is imp after HBASE-15180 optimization. After that we the cells flowing > in write path will be backed by the same byte[] where the RPC read the > request into. By default we have MSLAB On and so we have a copy operation > while adding Cells to memstore. This copy might not be there if > 1. MSLAB is turned OFF > 2. Cell size is more than a configurable max size. This defaults to 256 KB > 3. If the operation is Append/Increment. > In such cases, we should just clone the Cell into a new byte[] and then add > to memstore. Or else we keep referring to the bigger byte[] chunk for longer > time. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
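For reference, a small sketch of how that threshold could be raised through the client/server Configuration; the property name comes from the snippet above, and the 1 MB value is purely illustrative.
{code}
// Sketch only: raising the MSLAB max-allocation threshold so that cells up to 1 MB are
// still copied into MSLAB chunks instead of retaining the original RPC buffer.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class MslabMaxAllocExample {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Default is 256 * 1024; allocations larger than this bypass the MSLAB allocator.
    conf.setInt("hbase.hregion.memstore.mslab.max.allocation", 1024 * 1024);
    System.out.println(conf.getInt("hbase.hregion.memstore.mslab.max.allocation", 256 * 1024));
  }
}
{code}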
[jira] [Commented] (HBASE-16169) Make RegionSizeCalculator scalable
[ https://issues.apache.org/jira/browse/HBASE-16169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15376408#comment-15376408 ] Hadoop QA commented on HBASE-16169: --- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 3 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 52s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 27s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 38s {color} | {color:green} master passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 49s {color} | {color:green} master passed with JDK v1.7.0_80 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 9m 2s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 59s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 1m 11s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 40s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 42s {color} | {color:green} master passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 26s {color} | {color:green} master passed with JDK v1.7.0_80 {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 8s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 46s {color} | {color:green} the patch passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 2m 46s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 46s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 49s {color} | {color:green} the patch passed with JDK v1.7.0_80 {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 49s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 49s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 9m 1s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 57s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | 
{color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 46m 44s {color} | {color:green} Patch does not cause any errors with Hadoop 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} | | {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 1m 11s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 7m 44s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 43s {color} | {color:green} the patch passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 27s {color} | {color:green} the patch passed with JDK v1.7.0_80 {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 30s {color} | {color:green} hbase-protocol in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 20s {color} | {color:green} hbase-client in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 149m 29s {color} | {color:green} hbase-server in the patch passed. {color} | | {color:green}+1{color} | {color
[jira] [Comment Edited] (HBASE-16205) When Cells are not copied to MSLAB, deep clone it while adding to Memstore
[ https://issues.apache.org/jira/browse/HBASE-16205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15376404#comment-15376404 ] binlijin edited comment on HBASE-16205 at 7/14/16 6:29 AM: --- I mean if the byte[] is bigger enough, should be all cells that backend by the big byte[] not copy to the MSLAB? was (Author: aoxiang): I mean if the byte[] is bigger enough, should all cells that backend by the big byte[] not copy to the MSLAB? > When Cells are not copied to MSLAB, deep clone it while adding to Memstore > -- > > Key: HBASE-16205 > URL: https://issues.apache.org/jira/browse/HBASE-16205 > Project: HBase > Issue Type: Sub-task > Components: regionserver >Affects Versions: 2.0.0 >Reporter: Anoop Sam John >Assignee: Anoop Sam John >Priority: Critical > Fix For: 2.0.0 > > Attachments: HBASE-16205.patch > > > This is imp after HBASE-15180 optimization. After that we the cells flowing > in write path will be backed by the same byte[] where the RPC read the > request into. By default we have MSLAB On and so we have a copy operation > while adding Cells to memstore. This copy might not be there if > 1. MSLAB is turned OFF > 2. Cell size is more than a configurable max size. This defaults to 256 KB > 3. If the operation is Append/Increment. > In such cases, we should just clone the Cell into a new byte[] and then add > to memstore. Or else we keep referring to the bigger byte[] chunk for longer > time. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16205) When Cells are not copied to MSLAB, deep clone it while adding to Memstore
[ https://issues.apache.org/jira/browse/HBASE-16205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15376404#comment-15376404 ] binlijin commented on HBASE-16205: -- I mean if the byte[] is big enough, should all cells backed by the big byte[] skip the copy to the MSLAB? > When Cells are not copied to MSLAB, deep clone it while adding to Memstore > -- > > Key: HBASE-16205 > URL: https://issues.apache.org/jira/browse/HBASE-16205 > Project: HBase > Issue Type: Sub-task > Components: regionserver >Affects Versions: 2.0.0 >Reporter: Anoop Sam John >Assignee: Anoop Sam John >Priority: Critical > Fix For: 2.0.0 > > Attachments: HBASE-16205.patch > > > This is imp after HBASE-15180 optimization. After that we the cells flowing > in write path will be backed by the same byte[] where the RPC read the > request into. By default we have MSLAB On and so we have a copy operation > while adding Cells to memstore. This copy might not be there if > 1. MSLAB is turned OFF > 2. Cell size is more than a configurable max size. This defaults to 256 KB > 3. If the operation is Append/Increment. > In such cases, we should just clone the Cell into a new byte[] and then add > to memstore. Or else we keep referring to the bigger byte[] chunk for longer > time. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16144) Replication queue's lock will live forever if RS acquiring the lock has died prematurely
[ https://issues.apache.org/jira/browse/HBASE-16144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15376399#comment-15376399 ] Duo Zhang commented on HBASE-16144: --- [~tedyu] Let's commit v6 patch? The failed UTs are all in TestAcidGuarantees, unrelated. Thanks. > Replication queue's lock will live forever if RS acquiring the lock has died > prematurely > > > Key: HBASE-16144 > URL: https://issues.apache.org/jira/browse/HBASE-16144 > Project: HBase > Issue Type: Bug >Affects Versions: 1.2.1, 1.1.5, 0.98.20 >Reporter: Phil Yang >Assignee: Phil Yang > Attachments: HBASE-16144-0.98.v1.patch, > HBASE-16144-branch-1-v1.patch, HBASE-16144-branch-1-v2.patch, > HBASE-16144-branch-1.1-v1.patch, HBASE-16144-branch-1.1-v2.patch, > HBASE-16144-v1.patch, HBASE-16144-v2.patch, HBASE-16144-v3.patch, > HBASE-16144-v4.patch, HBASE-16144-v5.patch, HBASE-16144-v6.patch, > HBASE-16144-v6.patch > > > In default, we will use multi operation when we claimQueues from ZK. But if > we set hbase.zookeeper.useMulti=false, we will add a lock first, then copy > nodes, finally clean old queue and the lock. > However, if the RS acquiring the lock crash before claimQueues done, the lock > will always be there and other RS can never claim the queue. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
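As background for the fix being discussed, here is a rough, illustrative sketch (not the actual patch) of the stale-lock check: if the region server named in the lock znode is no longer alive, the lock can be deleted so another server can claim the queue. The znode paths and the lock-content format are assumptions made for this example.
{code}
// Illustrative sketch only (not the committed patch): detect and break a stale
// replication-queue lock left behind by a dead region server when
// hbase.zookeeper.useMulti=false. Znode paths and lock contents are assumed here.
import java.nio.charset.StandardCharsets;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooKeeper;

public class StaleReplicationLockSketch {
  /** Returns true if the lock was stale and has been removed. */
  static boolean breakLockIfOwnerDead(ZooKeeper zk, String lockZnode) throws Exception {
    byte[] data;
    try {
      data = zk.getData(lockZnode, false, null);
    } catch (KeeperException.NoNodeException e) {
      return false; // no lock present, nothing to break
    }
    // Assume the lock znode stores the name of the region server that took it.
    String owner = new String(data, StandardCharsets.UTF_8);
    // A live region server keeps an ephemeral znode under /hbase/rs.
    if (zk.exists("/hbase/rs/" + owner, false) != null) {
      return false; // owner is still alive, leave the lock alone
    }
    // Owner died before claimQueues finished: remove the lock so another RS can claim the queue.
    zk.delete(lockZnode, -1);
    return true;
  }
}
{code}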
[jira] [Commented] (HBASE-16144) Replication queue's lock will live forever if RS acquiring the lock has died prematurely
[ https://issues.apache.org/jira/browse/HBASE-16144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15376392#comment-15376392 ] Phil Yang commented on HBASE-16144: --- Any more comments? Thanks > Replication queue's lock will live forever if RS acquiring the lock has died > prematurely > > > Key: HBASE-16144 > URL: https://issues.apache.org/jira/browse/HBASE-16144 > Project: HBase > Issue Type: Bug >Affects Versions: 1.2.1, 1.1.5, 0.98.20 >Reporter: Phil Yang >Assignee: Phil Yang > Attachments: HBASE-16144-0.98.v1.patch, > HBASE-16144-branch-1-v1.patch, HBASE-16144-branch-1-v2.patch, > HBASE-16144-branch-1.1-v1.patch, HBASE-16144-branch-1.1-v2.patch, > HBASE-16144-v1.patch, HBASE-16144-v2.patch, HBASE-16144-v3.patch, > HBASE-16144-v4.patch, HBASE-16144-v5.patch, HBASE-16144-v6.patch, > HBASE-16144-v6.patch > > > In default, we will use multi operation when we claimQueues from ZK. But if > we set hbase.zookeeper.useMulti=false, we will add a lock first, then copy > nodes, finally clean old queue and the lock. > However, if the RS acquiring the lock crash before claimQueues done, the lock > will always be there and other RS can never claim the queue. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16205) When Cells are not copied to MSLAB, deep clone it while adding to Memstore
[ https://issues.apache.org/jira/browse/HBASE-16205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15376391#comment-15376391 ] Anoop Sam John commented on HBASE-16205: In trunk, after HBASE-15180 we no longer make a copy to create cells in Codec#Decoder (please refer to IPCUtil#createCellScannerReusingBuffers). This relies on the assumption that the Cells will anyway get copied to the MSLAB area before being added to the memstore; so before HBASE-15180 we were doing 2 copies. Now, in the 3 cases listed above, the copy of cells to the MSLAB area won't happen. In those cases the cells added to the Memstore (which last until the next flush) keep referring to the same byte[] into which the RPC read the request, and that buffer is much bigger because it also contains the request header, mutation PB bytes, etc. So those bigger buffers cannot be GCed as long as Cells in the memstore still refer to them. This deep copy is to avoid such cases. Am I explaining it correctly now? > When Cells are not copied to MSLAB, deep clone it while adding to Memstore > -- > > Key: HBASE-16205 > URL: https://issues.apache.org/jira/browse/HBASE-16205 > Project: HBase > Issue Type: Sub-task > Components: regionserver >Affects Versions: 2.0.0 >Reporter: Anoop Sam John >Assignee: Anoop Sam John >Priority: Critical > Fix For: 2.0.0 > > Attachments: HBASE-16205.patch > > > This is imp after HBASE-15180 optimization. After that we the cells flowing > in write path will be backed by the same byte[] where the RPC read the > request into. By default we have MSLAB On and so we have a copy operation > while adding Cells to memstore. This copy might not be there if > 1. MSLAB is turned OFF > 2. Cell size is more than a configurable max size. This defaults to 256 KB > 3. If the operation is Append/Increment. > In such cases, we should just clone the Cell into a new byte[] and then add > to memstore. Or else we keep referring to the bigger byte[] chunk for longer > time. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
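A minimal sketch of the idea described above, assuming KeyValueUtil#copyToNewKeyValue is available: when the MSLAB copy is skipped, deep-clone the cell into a right-sized byte[] so the memstore stops pinning the large RPC buffer.
{code}
// Minimal sketch of the idea (not the attached patch): when the MSLAB copy is skipped,
// deep-clone the cell into a right-sized byte[] so the memstore no longer pins the
// much larger RPC request buffer.
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.KeyValueUtil;

public class DeepCloneSketch {
  static Cell deepCloneIfNeeded(Cell cell, boolean copiedToMslab) {
    if (copiedToMslab) {
      return cell; // already backed by an MSLAB chunk, nothing to do
    }
    // copyToNewKeyValue allocates a fresh byte[] of exactly the cell's serialized size
    // and copies the cell into it, dropping the reference to the RPC buffer.
    KeyValue copy = KeyValueUtil.copyToNewKeyValue(cell);
    return copy;
  }
}
{code}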
[jira] [Commented] (HBASE-16095) Add priority to TableDescriptor and priority region open thread pool
[ https://issues.apache.org/jira/browse/HBASE-16095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15376384#comment-15376384 ] Hudson commented on HBASE-16095: FAILURE: Integrated in HBase-1.4 #287 (See [https://builds.apache.org/job/HBase-1.4/287/]) HBASE-16095 Add priority to TableDescriptor and priority region open (enis: rev 09c7b1e962e9c8dd2bd8749553a0c79c5518ae99) * hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java * hbase-client/src/main/java/org/apache/hadoop/hbase/executor/EventType.java * hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionOpen.java * hbase-client/src/main/java/org/apache/hadoop/hbase/executor/ExecutorType.java * hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java * hbase-client/src/main/java/org/apache/hadoop/hbase/HTableDescriptor.java * hbase-client/src/test/java/org/apache/hadoop/hbase/TestHTableDescriptor.java * hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/handler/OpenPriorityRegionHandler.java * hbase-server/src/main/java/org/apache/hadoop/hbase/executor/ExecutorService.java > Add priority to TableDescriptor and priority region open thread pool > > > Key: HBASE-16095 > URL: https://issues.apache.org/jira/browse/HBASE-16095 > Project: HBase > Issue Type: Bug >Reporter: Enis Soztutar >Assignee: Enis Soztutar > Fix For: 2.0.0, 1.3.0, 1.4.0, 0.98.21 > > Attachments: HBASE-16095-0.98.patch, HBASE-16095-0.98.patch, > hbase-16095_v0.patch, hbase-16095_v1.patch, hbase-16095_v2.patch, > hbase-16095_v3.patch > > > This is in the similar area with HBASE-15816, and also required with the > current secondary indexing for Phoenix. > The problem with P secondary indexes is that data table regions depend on > index regions to be able to make progress. Possible distributed deadlocks can > be prevented via custom RpcScheduler + RpcController configuration via > HBASE-11048 and PHOENIX-938. However, region opening also has the same > deadlock situation, because data region open has to replay the WAL edits to > the index regions. There is only 1 thread pool to open regions with 3 workers > by default. So if the cluster is recovering / restarting from scratch, the > deadlock happens because some index regions cannot be opened due to them > being in the same queue waiting for data regions to open (which waits for > RPC'ing to index regions which is not open). This is reproduced in almost all > Phoenix secondary index clusters (mutable table w/o transactions) that we > see. > The proposal is to have a "high priority" region opening thread pool, and > have the HTD carry the relative priority of a table. This maybe useful for > other "framework" level tables from Phoenix, Tephra, Trafodian, etc if they > want some specific tables to become online faster. > As a follow up patch, we can also take a look at how this priority > information can be used by the rpc scheduler on the server side or rpc > controller on the client side, so that we do not have to set priorities > manually per-operation. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16205) When Cells are not copied to MSLAB, deep clone it while adding to Memstore
[ https://issues.apache.org/jira/browse/HBASE-16205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15376383#comment-15376383 ] binlijin commented on HBASE-16205: -- If the backing byte[] is big enough, none of the cells are copied to the MSLAB. So what is the benefit of deep cloning the big cell? > When Cells are not copied to MSLAB, deep clone it while adding to Memstore > -- > > Key: HBASE-16205 > URL: https://issues.apache.org/jira/browse/HBASE-16205 > Project: HBase > Issue Type: Sub-task > Components: regionserver >Affects Versions: 2.0.0 >Reporter: Anoop Sam John >Assignee: Anoop Sam John >Priority: Critical > Fix For: 2.0.0 > > Attachments: HBASE-16205.patch > > > This is imp after HBASE-15180 optimization. After that we the cells flowing > in write path will be backed by the same byte[] where the RPC read the > request into. By default we have MSLAB On and so we have a copy operation > while adding Cells to memstore. This copy might not be there if > 1. MSLAB is turned OFF > 2. Cell size is more than a configurable max size. This defaults to 256 KB > 3. If the operation is Append/Increment. > In such cases, we should just clone the Cell into a new byte[] and then add > to memstore. Or else we keep referring to the bigger byte[] chunk for longer > time. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-15213) Fix increment performance regression caused by HBASE-8763 on branch-1.0
[ https://issues.apache.org/jira/browse/HBASE-15213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15376340#comment-15376340 ] Allan Yang commented on HBASE-15213: Why not remove the WriteQueue directly, since mvcc and the seqId are combined now and the mvcc number is guaranteed to advance by the seqId's increasing order? Why do we need to wait for the previous transaction to finish? > Fix increment performance regression caused by HBASE-8763 on branch-1.0 > --- > > Key: HBASE-15213 > URL: https://issues.apache.org/jira/browse/HBASE-15213 > Project: HBase > Issue Type: Sub-task > Components: Performance >Reporter: Junegunn Choi >Assignee: Junegunn Choi > Fix For: 1.1.4, 1.0.4 > > Attachments: 15157v3.branch-1.1.patch, HBASE-15213-increment.png, > HBASE-15213.branch-1.0.patch, HBASE-15213.v1.branch-1.0.patch > > > This is an attempt to fix the increment performance regression caused by > HBASE-8763 on branch-1.0. > I'm aware that hbase.increment.fast.but.narrow.consistency was added to > branch-1.0 (HBASE-15031) to address the issue and a separate work is ongoing > on master branch, but anyway, this is my take on the problem. > I read through HBASE-14460 and HBASE-8763; it wasn't clear to me what > caused the slowdown, but I could indeed reproduce the performance regression. > Test setup: > - Server: 4-core Xeon 2.4GHz Linux server running mini cluster (100 handlers, > JDK 1.7) > - Client: Another box of the same spec > - Increments on random 10k records on a single-region table, recreated every > time > Increment throughput (TPS): > || Num threads || Before HBASE-8763 (d6cc2fb) || branch-1.0 || branch-1.0 > (narrow-consistency) || > || 1| 2661 | 2486| 2359 | > || 2| 5048 | 5064| 4867 | > || 4| 7503 | 8071| 8690 | > || 8| 10471| 10886 | 13980 | > || 16 | 15515| 9418| 18601 | > || 32 | 17699| 5421| 20540 | > || 64 | 20601| 4038| 25591 | > || 96 | 19177| 3891| 26017 | > We can clearly observe that the throughput degrades as we increase the > number of concurrent requests, which led me to believe that there's severe > context switching overhead, and I could indirectly confirm that suspicion with the > cs entry in vmstat output. branch-1.0 shows a much higher number of context > switches even with much lower throughput. > Here are the observations: > - WriteEntry in the writeQueue can only be removed by the very handler that > put it, only when it is at the front of the queue and marked complete. > - Since a WriteEntry is marked complete after the wait-loop, only one entry > can be removed at a time. > - This stringent condition causes O(N^2) context switches where N is the > number of concurrent handlers processing requests. > So what I tried here is to mark WriteEntry complete before we go into the > wait-loop. With the change, multiple WriteEntries can be shifted at a time > without context switches. I changed writeQueue to LinkedHashSet since a fast > containment check is needed as WriteEntry can be removed by any handler. > The numbers look good, virtually identical to the pre-HBASE-8763 era. > || Num threads || branch-1.0 with fix || > || 1| 2459 | > || 2| 4976 | > || 4| 8033 | > || 8| 12292| > || 16 | 15234| > || 32 | 16601| > || 64 | 19994| > || 96 | 20052| > So what do you think about it? Please let me know if I'm missing anything. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
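A greatly simplified sketch of the change described in the issue (not HBase's actual MultiVersionConcurrencyControl; the class and field names here are invented for illustration): marking the entry complete before waiting lets whichever handler reaches the head of the queue drain every already-completed entry in one pass instead of one entry per wake-up.
{code}
// Simplified sketch, for illustration only: mark a write entry complete *before*
// waiting, so any handler can advance the read point and drain the completed prefix
// of the queue in one pass. Entries must be begun in increasing seqId order.
import java.util.Iterator;
import java.util.LinkedHashSet;

public class MvccSketch {
  static final class WriteEntry {
    final long seqId;
    volatile boolean complete;
    WriteEntry(long seqId) { this.seqId = seqId; }
  }

  private final LinkedHashSet<WriteEntry> writeQueue = new LinkedHashSet<WriteEntry>();
  private long readPoint;

  synchronized WriteEntry begin(long seqId) {
    WriteEntry e = new WriteEntry(seqId);
    writeQueue.add(e);
    return e;
  }

  synchronized void complete(WriteEntry e) throws InterruptedException {
    e.complete = true;            // mark complete first ...
    advance();
    while (readPoint < e.seqId) { // ... then wait until our seqId becomes visible
      wait();
    }
  }

  // Drain the longest completed prefix of the queue and advance the read point.
  private void advance() {
    Iterator<WriteEntry> it = writeQueue.iterator();
    while (it.hasNext()) {
      WriteEntry head = it.next();
      if (!head.complete) {
        break;
      }
      it.remove();
      readPoint = head.seqId;
    }
    notifyAll();
  }
}
{code}
The point of the reordering is that a handler blocked behind an incomplete entry no longer has to wake up once per predecessor; the handler that completes the head entry advances the read point past every completed successor in the same call.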
[jira] [Commented] (HBASE-16224) Reduce the number of RPCs for the large PUTs
[ https://issues.apache.org/jira/browse/HBASE-16224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15376308#comment-15376308 ] Hadoop QA commented on HBASE-16224: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 18s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 34s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 14s {color} | {color:green} master passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 0s {color} | {color:green} master passed with JDK v1.7.0_80 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 56s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 31s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 37s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s {color} | {color:green} master passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s {color} | {color:green} master passed with JDK v1.7.0_80 {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 18s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 19s {color} | {color:green} the patch passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 19s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 57s {color} | {color:green} the patch passed with JDK v1.7.0_80 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 57s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 59s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 28s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 29m 26s {color} | {color:green} Patch does not cause any errors with Hadoop 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 24s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s {color} | {color:green} the patch passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s {color} | {color:green} the patch passed with JDK v1.7.0_80 {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 58s {color} | {color:green} hbase-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 93m 43s {color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 28s {color} | {color:green} Patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 148m 59s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hbase.master.cleaner.TestSnapshotFromMaster | \\ \\ || Subsystem || Report/Notes || | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12817867/HBASE-16224-v3.patch | | JIRA Issue | HBASE-16224 | | Optional Tests | asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux asf907.gq1.ygridcore.net 3.13.0-36-lowlatency
[jira] [Commented] (HBASE-3727) MultiHFileOutputFormat
[ https://issues.apache.org/jira/browse/HBASE-3727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15376305#comment-15376305 ] Anoop Sam John commented on HBASE-3727: --- Add Release Note pls. > MultiHFileOutputFormat > -- > > Key: HBASE-3727 > URL: https://issues.apache.org/jira/browse/HBASE-3727 > Project: HBase > Issue Type: New Feature >Affects Versions: 2.0.0 >Reporter: Andrew Purtell >Assignee: yi liang >Priority: Minor > Attachments: HBASE-3727-V3.patch, HBASE-3727-V4.patch, > HBASE-3727-V5.patch, MH2.patch, MultiHFileOutputFormat.java, > MultiHFileOutputFormat.java, MultiHFileOutputFormat.java, > TestMultiHFileOutputFormat.java > > > Like MultiTableOutputFormat, but outputting HFiles. Key is tablename as an > IBW. Creates sub-writers (code cut and pasted from HFileOutputFormat) on > demand that produce HFiles in per-table subdirectories of the configured > output path. Does not currently support partitioning for existing tables / > incremental update. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
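For readers new to the feature, a rough sketch of the "sub-writer per table" idea in the description; the class below is illustrative only, the delegate wiring is simplified, and it does not reflect the committed patch.
{code}
// Rough sketch of the "sub-writer per table" idea described above (not the committed
// patch): the map output key carries the table name, and a delegate HFile writer is
// created lazily for each table. Routing each delegate into its own per-table
// subdirectory of the output path is elided to keep the sketch short.
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.mapreduce.RecordWriter;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class MultiHFileOutputFormatSketch
    extends FileOutputFormat<ImmutableBytesWritable, Cell> {

  @Override
  public RecordWriter<ImmutableBytesWritable, Cell> getRecordWriter(final TaskAttemptContext ctx)
      throws IOException, InterruptedException {
    return new RecordWriter<ImmutableBytesWritable, Cell>() {
      // One delegate writer per table name, created on demand.
      private final Map<String, RecordWriter<ImmutableBytesWritable, Cell>> writers =
          new HashMap<String, RecordWriter<ImmutableBytesWritable, Cell>>();

      @Override
      public void write(ImmutableBytesWritable tableName, Cell cell)
          throws IOException, InterruptedException {
        String table = Bytes.toString(tableName.get(), tableName.getOffset(), tableName.getLength());
        RecordWriter<ImmutableBytesWritable, Cell> writer = writers.get(table);
        if (writer == null) {
          writer = new HFileOutputFormat2().getRecordWriter(ctx);
          writers.put(table, writer);
        }
        // The delegate expects the row key, not the table name, as its key.
        writer.write(new ImmutableBytesWritable(CellUtil.cloneRow(cell)), cell);
      }

      @Override
      public void close(TaskAttemptContext c) throws IOException, InterruptedException {
        for (RecordWriter<ImmutableBytesWritable, Cell> w : writers.values()) {
          w.close(c);
        }
      }
    };
  }
}
{code}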
[jira] [Commented] (HBASE-16195) Should not add chunk into chunkQueue if not using chunk pool in HeapMemStoreLAB
[ https://issues.apache.org/jira/browse/HBASE-16195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15376285#comment-15376285 ] Yu Li commented on HBASE-16195: --- Found below problem of integration with Hadoop-1.1, will give an addendum to fix it: {noformat} [ERROR] /home/jenkins/jenkins-slave/workspace/HBase-0.98-on-Hadoop-1.1/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMemStoreLAB.java:[167,8] error: cannot find symbol {noformat} > Should not add chunk into chunkQueue if not using chunk pool in > HeapMemStoreLAB > --- > > Key: HBASE-16195 > URL: https://issues.apache.org/jira/browse/HBASE-16195 > Project: HBase > Issue Type: Sub-task >Affects Versions: 1.1.5, 1.2.2, 0.98.20 >Reporter: Yu Li >Assignee: Yu Li > Fix For: 2.0.0, 1.3.0, 1.4.0, 1.1.6, 0.98.21, 1.2.3 > > Attachments: HBASE-16195.patch, HBASE-16195_v2.patch, > HBASE-16195_v3.patch, HBASE-16195_v4.patch, HBASE-16195_v4.patch > > > Problem description and analysis please refer to HBASE-16193 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16209) Provide an ExponentialBackOffPolicy sleep between failed region open requests
[ https://issues.apache.org/jira/browse/HBASE-16209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15376277#comment-15376277 ] Hadoop QA commented on HBASE-16209: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 44s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s {color} | {color:green} master passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s {color} | {color:green} master passed with JDK v1.7.0_80 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 54s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 17s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 57s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 28s {color} | {color:green} master passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 34s {color} | {color:green} master passed with JDK v1.7.0_80 {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 45s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s {color} | {color:green} the patch passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 39s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s {color} | {color:green} the patch passed with JDK v1.7.0_80 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 34s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 52s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 17s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s {color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 27m 28s {color} | {color:green} Patch does not cause any errors with Hadoop 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 15s {color} | {color:red} hbase-server generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s {color} | {color:green} the patch passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 34s {color} | {color:green} the patch passed with JDK v1.7.0_80 {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 112m 38s {color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s {color} | {color:green} Patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 156m 32s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hbase-server | | | Futile attempt to change max pool size of ScheduledThreadPoolExecutor in new org.apache.hadoop.hbase.master.AssignmentManager(MasterServices, ServerManager, LoadBalancer, ExecutorService, MetricsMaster, TableLockManager, TableStateManager) At AssignmentManager.java:pool size of ScheduledThreadPoolExecutor in new org.apache.hadoop.hbase.master.AssignmentManager(MasterServices, ServerManager, LoadBalancer, ExecutorService, MetricsMaster, TableLockManager, TableStateManager) At AssignmentManager.java:[line 251] | | Failed junit tests | hadoop.hbase.master.procedure.TestMasterFailo
[jira] [Commented] (HBASE-16095) Add priority to TableDescriptor and priority region open thread pool
[ https://issues.apache.org/jira/browse/HBASE-16095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15376250#comment-15376250 ] Hudson commented on HBASE-16095: SUCCESS: Integrated in HBase-1.3-IT #755 (See [https://builds.apache.org/job/HBase-1.3-IT/755/]) HBASE-16095 Add priority to TableDescriptor and priority region open (enis: rev ab1e0dd440ee53e03d0ebfa8f0f0b27d585880a5) * hbase-client/src/main/java/org/apache/hadoop/hbase/executor/ExecutorType.java * hbase-client/src/main/java/org/apache/hadoop/hbase/executor/EventType.java * hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java * hbase-client/src/main/java/org/apache/hadoop/hbase/HTableDescriptor.java * hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/handler/OpenPriorityRegionHandler.java * hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionOpen.java * hbase-server/src/main/java/org/apache/hadoop/hbase/executor/ExecutorService.java * hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java * hbase-client/src/test/java/org/apache/hadoop/hbase/TestHTableDescriptor.java > Add priority to TableDescriptor and priority region open thread pool > > > Key: HBASE-16095 > URL: https://issues.apache.org/jira/browse/HBASE-16095 > Project: HBase > Issue Type: Bug >Reporter: Enis Soztutar >Assignee: Enis Soztutar > Fix For: 2.0.0, 1.3.0, 1.4.0, 0.98.21 > > Attachments: HBASE-16095-0.98.patch, HBASE-16095-0.98.patch, > hbase-16095_v0.patch, hbase-16095_v1.patch, hbase-16095_v2.patch, > hbase-16095_v3.patch > > > This is in the similar area with HBASE-15816, and also required with the > current secondary indexing for Phoenix. > The problem with P secondary indexes is that data table regions depend on > index regions to be able to make progress. Possible distributed deadlocks can > be prevented via custom RpcScheduler + RpcController configuration via > HBASE-11048 and PHOENIX-938. However, region opening also has the same > deadlock situation, because data region open has to replay the WAL edits to > the index regions. There is only 1 thread pool to open regions with 3 workers > by default. So if the cluster is recovering / restarting from scratch, the > deadlock happens because some index regions cannot be opened due to them > being in the same queue waiting for data regions to open (which waits for > RPC'ing to index regions which is not open). This is reproduced in almost all > Phoenix secondary index clusters (mutable table w/o transactions) that we > see. > The proposal is to have a "high priority" region opening thread pool, and > have the HTD carry the relative priority of a table. This maybe useful for > other "framework" level tables from Phoenix, Tephra, Trafodian, etc if they > want some specific tables to become online faster. > As a follow up patch, we can also take a look at how this priority > information can be used by the rpc scheduler on the server side or rpc > controller on the client side, so that we do not have to set priorities > manually per-operation. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16227) [Shell] Column value formatter not working in scans
[ https://issues.apache.org/jira/browse/HBASE-16227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15376252#comment-15376252 ] Hudson commented on HBASE-16227: SUCCESS: Integrated in HBase-1.2 #672 (See [https://builds.apache.org/job/HBase-1.2/672/]) HBASE-16227 [Shell] Column value formatter not working in scans. Tested (appy: rev 54f2d9df2d65d75d129bdf3bb5debaa20bd238f1) * hbase-shell/src/main/ruby/hbase/table.rb > [Shell] Column value formatter not working in scans > --- > > Key: HBASE-16227 > URL: https://issues.apache.org/jira/browse/HBASE-16227 > Project: HBase > Issue Type: Bug >Reporter: Appy >Assignee: Appy > Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.3 > > Attachments: HBASE-16227.branch-1.001.patch, > HBASE-16227.master.001.patch > > > {noformat} > hbase(main):003:0> create 't2', 'f' > Created table t2 > Took 1.2750 seconds > hbase(main):004:0> put 't2', 'row', 'f:x', "\x00\x00\x00\x00\x00&\x1B\x85" > Took 0.0680 seconds > hbase(main):005:0> scan 't2', { COLUMNS => 'f:x:toLong' } > ROW COLUMN+CELL > row column=f:x, > timestamp=1468443538145, value=\x00\x00\x00\x00\x00&\x1B\x85 > 1 row(s) > Took 0.0070 seconds > {noformat} > The value should instead be some number. > Caused by HBASE-5980 > With the patch > {noformat} > hbase(main):001:0> scan 't2', { COLUMNS => 'f:x:toLong' } > ROW COLUMN+CELL > row column=f:x, > timestamp=1468443538145, value=2497413 > 1 row(s) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16227) [Shell] Column value formatter not working in scans
[ https://issues.apache.org/jira/browse/HBASE-16227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15376232#comment-15376232 ] Hudson commented on HBASE-16227: SUCCESS: Integrated in HBase-1.4 #286 (See [https://builds.apache.org/job/HBase-1.4/286/]) HBASE-16227 [Shell] Column value formatter not working in scans. Tested (appy: rev 8cf6adae7280cc9ae9c1d55c2023497d626a4d64) * hbase-shell/src/main/ruby/hbase/table.rb > [Shell] Column value formatter not working in scans > --- > > Key: HBASE-16227 > URL: https://issues.apache.org/jira/browse/HBASE-16227 > Project: HBase > Issue Type: Bug >Reporter: Appy >Assignee: Appy > Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.3 > > Attachments: HBASE-16227.branch-1.001.patch, > HBASE-16227.master.001.patch > > > {noformat} > hbase(main):003:0> create 't2', 'f' > Created table t2 > Took 1.2750 seconds > hbase(main):004:0> put 't2', 'row', 'f:x', "\x00\x00\x00\x00\x00&\x1B\x85" > Took 0.0680 seconds > hbase(main):005:0> scan 't2', { COLUMNS => 'f:x:toLong' } > ROW COLUMN+CELL > row column=f:x, > timestamp=1468443538145, value=\x00\x00\x00\x00\x00&\x1B\x85 > 1 row(s) > Took 0.0070 seconds > {noformat} > The value should instead be some number. > Caused by HBASE-5980 > With the patch > {noformat} > hbase(main):001:0> scan 't2', { COLUMNS => 'f:x:toLong' } > ROW COLUMN+CELL > row column=f:x, > timestamp=1468443538145, value=2497413 > 1 row(s) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16209) Provide an ExponentialBackOffPolicy sleep between failed region open requests
[ https://issues.apache.org/jira/browse/HBASE-16209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15376223#comment-15376223 ] Hadoop QA commented on HBASE-16209: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 24s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 3s {color} | {color:green} master passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s {color} | {color:green} master passed with JDK v1.7.0_80 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 59s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 18s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 15s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 36s {color} | {color:green} master passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 35s {color} | {color:green} master passed with JDK v1.7.0_80 {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 51s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s {color} | {color:green} the patch passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 43s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s {color} | {color:green} the patch passed with JDK v1.7.0_80 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 42s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 57s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 17s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s {color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 30m 35s {color} | {color:green} Patch does not cause any errors with Hadoop 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 43s {color} | {color:red} hbase-server generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 31s {color} | {color:green} the patch passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 39s {color} | {color:green} the patch passed with JDK v1.7.0_80 {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 97m 43s {color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s {color} | {color:green} Patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 146m 16s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hbase-server | | | Futile attempt to change max pool size of ScheduledThreadPoolExecutor in new org.apache.hadoop.hbase.master.AssignmentManager(MasterServices, ServerManager, LoadBalancer, ExecutorService, MetricsMaster, TableLockManager, TableStateManager) At AssignmentManager.java:pool size of ScheduledThreadPoolExecutor in new org.apache.hadoop.hbase.master.AssignmentManager(MasterServices, ServerManager, LoadBalancer, ExecutorService, MetricsMaster, TableLockManager, TableStateManager) At AssignmentManager.java:[line 251] | | Failed junit tests | hadoop.hbase.master.TestMasterStatusServlet | \\
[jira] [Commented] (HBASE-16227) [Shell] Column value formatter not working in scans
[ https://issues.apache.org/jira/browse/HBASE-16227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15376221#comment-15376221 ] Hudson commented on HBASE-16227: FAILURE: Integrated in HBase-1.3 #783 (See [https://builds.apache.org/job/HBase-1.3/783/]) HBASE-16227 [Shell] Column value formatter not working in scans. Tested (appy: rev 3ff9a458d9557dcad8452f1ed10452b5d16df9b3) * hbase-shell/src/main/ruby/hbase/table.rb > [Shell] Column value formatter not working in scans > --- > > Key: HBASE-16227 > URL: https://issues.apache.org/jira/browse/HBASE-16227 > Project: HBase > Issue Type: Bug >Reporter: Appy >Assignee: Appy > Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.3 > > Attachments: HBASE-16227.branch-1.001.patch, > HBASE-16227.master.001.patch > > > {noformat} > hbase(main):003:0> create 't2', 'f' > Created table t2 > Took 1.2750 seconds > hbase(main):004:0> put 't2', 'row', 'f:x', "\x00\x00\x00\x00\x00&\x1B\x85" > Took 0.0680 seconds > hbase(main):005:0> scan 't2', { COLUMNS => 'f:x:toLong' } > ROW COLUMN+CELL > row column=f:x, > timestamp=1468443538145, value=\x00\x00\x00\x00\x00&\x1B\x85 > 1 row(s) > Took 0.0070 seconds > {noformat} > The value should instead be some number. > Caused by HBASE-5980 > With the patch > {noformat} > hbase(main):001:0> scan 't2', { COLUMNS => 'f:x:toLong' } > ROW COLUMN+CELL > row column=f:x, > timestamp=1468443538145, value=2497413 > 1 row(s) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-14479) Apply the Leader/Followers pattern to RpcServer's Reader
[ https://issues.apache.org/jira/browse/HBASE-14479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15376201#comment-15376201 ] Hiroshi Ikeda commented on HBASE-14479: --- bq. The doRunLoop will doRead for each key gotten on a select. Reader.doRunLoop calls doRead(key) once for each key selected, and doRead calls Connection.readAndProcess() once for each call. readAndProcess reads and processes at most one request from a socket for each call. Actually, readAndProcess prepares a buffer whose size is just equal to the request's one, and reads data and calls process(). That means, doRunLoop processes at most one request for each key selected, and the following request is required to be selected again in order to be processed. That would be good if clients claimed one request at a time. Moreover, that naturally implements round-robin behavior for registered channels in the selector. But that is subtle for asynchronous multiple requests via one socket because of overhead including unnecessarily calling Selector.select(). If SASL is used and the request contains multiple substantial requests, all of them are processed in processUnwrappedData with while loop. > Apply the Leader/Followers pattern to RpcServer's Reader > > > Key: HBASE-14479 > URL: https://issues.apache.org/jira/browse/HBASE-14479 > Project: HBase > Issue Type: Improvement > Components: IPC/RPC, Performance >Reporter: Hiroshi Ikeda >Assignee: Hiroshi Ikeda >Priority: Minor > Attachments: HBASE-14479-V2 (1).patch, HBASE-14479-V2.patch, > HBASE-14479-V2.patch, HBASE-14479.patch, flamegraph-19152.svg, > flamegraph-32667.svg, gc.png, gets.png, io.png, median.png > > > {{RpcServer}} uses multiple selectors to read data for load distribution, but > the distribution is just done by round-robin. It is uncertain, especially for > long run, whether load is equally divided and resources are used without > being wasted. > Moreover, multiple selectors may cause excessive context switches which give > priority to low latency (while we just add the requests to queues), and it is > possible to reduce throughput of the whole server. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
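A toy NIO illustration of the behavior Hiroshi describes (not HBase's RpcServer; the port number and 4-byte length framing are invented for the example): the loop reads at most one request per selected key, so a second pipelined request on the same socket waits for the next select() pass.
{code}
// Toy illustration only: each pass reads and processes at most one request per
// selected key, so a pipelined request already sitting in the socket buffer has to
// wait until the channel is selected again.
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class OneRequestPerSelectSketch {
  public static void main(String[] args) throws IOException {
    Selector selector = Selector.open();
    ServerSocketChannel server = ServerSocketChannel.open();
    server.bind(new InetSocketAddress(12345));
    server.configureBlocking(false);
    server.register(selector, SelectionKey.OP_ACCEPT);
    while (true) {
      selector.select();
      Iterator<SelectionKey> it = selector.selectedKeys().iterator();
      while (it.hasNext()) {
        SelectionKey key = it.next();
        it.remove();
        if (key.isAcceptable()) {
          SocketChannel ch = server.accept();
          ch.configureBlocking(false);
          ch.register(selector, SelectionKey.OP_READ);
        } else if (key.isReadable()) {
          readOneRequest((SocketChannel) key.channel()); // at most ONE request per selected key
        }
      }
    }
  }

  // Reads a single length-prefixed request; any further pipelined requests stay in the
  // socket buffer until the channel is selected again. (A real server would also keep
  // per-connection state for partially read requests.)
  static void readOneRequest(SocketChannel ch) throws IOException {
    ByteBuffer len = ByteBuffer.allocate(4);
    while (len.hasRemaining() && ch.read(len) > 0) { }
    if (len.hasRemaining()) {
      return; // header not fully available yet
    }
    len.flip();
    ByteBuffer body = ByteBuffer.allocate(len.getInt());
    while (body.hasRemaining() && ch.read(body) > 0) { }
    // process(body) would be dispatched to a handler here
  }
}
{code}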
[jira] [Commented] (HBASE-15473) Documentation for the usage of hbase dataframe user api (JSON, Avro, etc)
[ https://issues.apache.org/jira/browse/HBASE-15473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15376200#comment-15376200 ] Hudson commented on HBASE-15473: FAILURE: Integrated in HBase-Trunk_matrix #1224 (See [https://builds.apache.org/job/HBase-Trunk_matrix/1224/]) HBASE-15473: Documentation for the usage of hbase dataframe user api (mstanleyjones: rev 86f37686278fa211e349da9e544eed5e1d3288c5) * src/main/asciidoc/_chapters/spark.adoc > Documentation for the usage of hbase dataframe user api (JSON, Avro, etc) > - > > Key: HBASE-15473 > URL: https://issues.apache.org/jira/browse/HBASE-15473 > Project: HBase > Issue Type: Sub-task > Components: documentation, spark >Reporter: Zhan Zhang >Assignee: Weiqing Yang >Priority: Blocker > Fix For: 2.0.0 > > Attachments: HBASE-15473_v1.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16224) Reduce the number of RPCs for the large PUTs
[ https://issues.apache.org/jira/browse/HBASE-16224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ChiaPing Tsai updated HBASE-16224: -- Status: Patch Available (was: Open) > Reduce the number of RPCs for the large PUTs > > > Key: HBASE-16224 > URL: https://issues.apache.org/jira/browse/HBASE-16224 > Project: HBase > Issue Type: Improvement >Reporter: ChiaPing Tsai >Priority: Minor > Attachments: HBASE-16224-v1.patch, HBASE-16224-v2.patch, > HBASE-16224-v3.patch > > > This patch is proposed to reduce the number of RPCs for large PUTs. > The number and data size of write threads (SingleServerRequestRunnable) are a > result of three main factors: > 1) The flush size taken by BufferedMutatorImpl#backgroundFlushCommits > 2) The limit on the number of tasks > 3) ClientBackoffPolicy > Many threads being created with only a few mutations each is a result of two reasons: 1) > many regions of the target table are on different servers; 2) the flush size in step > one is summed across “all” servers rather than per “individual” server. > This patch removes the flush-size limit in step one and adds a maximum size > to submit for each server in the AsyncProcess -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16224) Reduce the number of RPCs for the large PUTs
[ https://issues.apache.org/jira/browse/HBASE-16224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ChiaPing Tsai updated HBASE-16224: -- Attachment: HBASE-16224-v3.patch Added a trivial change in the hbase-server module > Reduce the number of RPCs for the large PUTs > > > Key: HBASE-16224 > URL: https://issues.apache.org/jira/browse/HBASE-16224 > Project: HBase > Issue Type: Improvement >Reporter: ChiaPing Tsai >Priority: Minor > Attachments: HBASE-16224-v1.patch, HBASE-16224-v2.patch, > HBASE-16224-v3.patch > > > This patch is proposed to reduce the number of RPCs for large PUTs. > The number and data size of write threads (SingleServerRequestRunnable) are a > result of three main factors: > 1) The flush size taken by BufferedMutatorImpl#backgroundFlushCommits > 2) The limit on the number of tasks > 3) ClientBackoffPolicy > Many threads being created with only a few mutations each is a result of two reasons: 1) > many regions of the target table are on different servers; 2) the flush size in step > one is summed across “all” servers rather than per “individual” server. > This patch removes the flush-size limit in step one and adds a maximum size > to submit for each server in the AsyncProcess -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16224) Reduce the number of RPCs for the large PUTs
[ https://issues.apache.org/jira/browse/HBASE-16224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ChiaPing Tsai updated HBASE-16224: -- Status: Open (was: Patch Available) > Reduce the number of RPCs for the large PUTs > > > Key: HBASE-16224 > URL: https://issues.apache.org/jira/browse/HBASE-16224 > Project: HBase > Issue Type: Improvement >Reporter: ChiaPing Tsai >Priority: Minor > Attachments: HBASE-16224-v1.patch, HBASE-16224-v2.patch, > HBASE-16224-v3.patch > > > This patch is proposed to reduce the number of RPCs for large PUTs. > The number and data size of write threads (SingleServerRequestRunnable) are a > result of three main factors: > 1) The flush size taken by BufferedMutatorImpl#backgroundFlushCommits > 2) The limit on the number of tasks > 3) ClientBackoffPolicy > Many threads being created with only a few mutations each is a result of two reasons: 1) > many regions of the target table are on different servers; 2) the flush size in step > one is summed across “all” servers rather than per “individual” server. > This patch removes the flush-size limit in step one and adds a maximum size > to submit for each server in the AsyncProcess -- This message was sent by Atlassian JIRA (v6.3.4#6332)
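As a rough illustration of the per-server submit size described in the HBASE-16224 summary, the sketch below groups mutations by hosting server and submits a batch as soon as that server's buffered bytes cross a threshold; the class and method names are hypothetical and not taken from the patch:
{code}
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical per-server buffering: each server's batch is flushed on its own
// size, instead of flushing on a size summed across all servers.
public class PerServerBuffer<M> {

  public interface Sink<T> {
    void submit(String server, List<T> batch); // e.g. one SingleServerRequestRunnable
  }

  private final long maxBytesPerServer;
  private final Sink<M> sink;
  private final Map<String, List<M>> buffered = new HashMap<>();
  private final Map<String, Long> bufferedBytes = new HashMap<>();

  public PerServerBuffer(long maxBytesPerServer, Sink<M> sink) {
    this.maxBytesPerServer = maxBytesPerServer;
    this.sink = sink;
  }

  public void add(String server, M mutation, long sizeInBytes) {
    buffered.computeIfAbsent(server, s -> new ArrayList<>()).add(mutation);
    long newSize = bufferedBytes.merge(server, sizeInBytes, Long::sum);
    if (newSize >= maxBytesPerServer) {
      flush(server); // only this server's batch is sent
    }
  }

  public void flush(String server) {
    List<M> batch = buffered.remove(server);
    bufferedBytes.remove(server);
    if (batch != null && !batch.isEmpty()) {
      sink.submit(server, batch);
    }
  }
}
{code}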
[jira] [Commented] (HBASE-15305) Fix a couple of incorrect anchors in HBase ref guide
[ https://issues.apache.org/jira/browse/HBASE-15305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15376179#comment-15376179 ] Hadoop QA commented on HBASE-15305: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 41s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 19s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 7s {color} | {color:green} master passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 10s {color} | {color:green} master passed with JDK v1.7.0_80 {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 34s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 14s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 31m 39s {color} | {color:green} Patch does not cause any errors with Hadoop 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 26s {color} | {color:green} the patch passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 37s {color} | {color:green} the patch passed with JDK v1.7.0_80 {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 176m 52s {color} | {color:red} root in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 230m 19s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hbase.replication.TestMasterReplication | | | hadoop.hbase.master.procedure.TestMasterFailoverWithProcedures | | | hadoop.hbase.snapshot.TestMobFlushSnapshotFromClient | | | hadoop.hbase.regionserver.TestHRegionWithInMemoryFlush | \\ \\ || Subsystem || Report/Notes || | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12790010/HBASE-15298-v1.patch | | JIRA Issue | HBASE-15305 | | Optional Tests | asflicense javac javadoc unit | | uname | Linux asf905.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / 86f3768 | | Default Java | 1.7.0_80 | | Multi-JDK versions | /home/jenkins/tools/java/jdk1.8.0:1.8.0 /home/jenkins/jenkins-slave/tools/hudson.model.JDK/JDK_1.7_latest_:1.7.0_80 | | unit | https://builds.apache.org/job/PreCommit-HBASE-Build/2627/artifact/patchprocess/patch-unit-root.txt | | unit test logs | https://builds.apache.org/job/PreCommit-HBASE-Build/2627/artifact/patchprocess/patch-unit-root.txt | | Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/2627/testReport/ | | modules | C: . U: . | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/2627/console | | Powered by | Apache Yetus 0.2.1 http://yetus.apache.org | This message was automatically generated. > Fix a couple of incorrect anchors in HBase ref guide > > > Key: HBASE-15305 > URL: https://issues.apache.org/jira/browse/HBASE-15305 > Project: HBase > Issue Type: Sub-task > Components: documentation >Reporter: Misty Stanley-Jones >Assignee: Misty Stanley-Jones > Fix For: 2.0.0 > > Attachments: HBASE-15305-v2.patch > > > From HBASE-15298: > {quote} > After this patch is applied, there are still two missing asciidoc anchors, > distributed.log.splitting and fail.fast.expired.active.master. These are > related to features removed by HBASE-14053 and HBASE-10569. I th
[jira] [Updated] (HBASE-16095) Add priority to TableDescriptor and priority region open thread pool
[ https://issues.apache.org/jira/browse/HBASE-16095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell updated HBASE-16095: --- Attachment: HBASE-16095-0.98.patch Here's another version of the 0.98 patch that keeps the value of HIGH_QOS the same and adds ADMIN_QOS as half of that, like we do on later branches. > Add priority to TableDescriptor and priority region open thread pool > > > Key: HBASE-16095 > URL: https://issues.apache.org/jira/browse/HBASE-16095 > Project: HBase > Issue Type: Bug >Reporter: Enis Soztutar >Assignee: Enis Soztutar > Fix For: 2.0.0, 1.3.0, 1.4.0, 0.98.21 > > Attachments: HBASE-16095-0.98.patch, HBASE-16095-0.98.patch, > hbase-16095_v0.patch, hbase-16095_v1.patch, hbase-16095_v2.patch, > hbase-16095_v3.patch > > > This is in the similar area with HBASE-15816, and also required with the > current secondary indexing for Phoenix. > The problem with P secondary indexes is that data table regions depend on > index regions to be able to make progress. Possible distributed deadlocks can > be prevented via custom RpcScheduler + RpcController configuration via > HBASE-11048 and PHOENIX-938. However, region opening also has the same > deadlock situation, because data region open has to replay the WAL edits to > the index regions. There is only 1 thread pool to open regions with 3 workers > by default. So if the cluster is recovering / restarting from scratch, the > deadlock happens because some index regions cannot be opened due to them > being in the same queue waiting for data regions to open (which waits for > RPC'ing to index regions which is not open). This is reproduced in almost all > Phoenix secondary index clusters (mutable table w/o transactions) that we > see. > The proposal is to have a "high priority" region opening thread pool, and > have the HTD carry the relative priority of a table. This maybe useful for > other "framework" level tables from Phoenix, Tephra, Trafodian, etc if they > want some specific tables to become online faster. > As a follow up patch, we can also take a look at how this priority > information can be used by the rpc scheduler on the server side or rpc > controller on the client side, so that we do not have to set priorities > manually per-operation. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16095) Add priority to TableDescriptor and priority region open thread pool
[ https://issues.apache.org/jira/browse/HBASE-16095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell updated HBASE-16095: --- Attachment: HBASE-16095-0.98.patch Attaching what I am going to push to 0.98 after tests check out. New units pass. I brought back HConstants.ADMIN_QOS for functional parity with later branches when using this feature even though ADMIN_QOS isn't used elsewhere in 0.98. > Add priority to TableDescriptor and priority region open thread pool > > > Key: HBASE-16095 > URL: https://issues.apache.org/jira/browse/HBASE-16095 > Project: HBase > Issue Type: Bug >Reporter: Enis Soztutar >Assignee: Enis Soztutar > Fix For: 2.0.0, 1.3.0, 1.4.0, 0.98.21 > > Attachments: HBASE-16095-0.98.patch, hbase-16095_v0.patch, > hbase-16095_v1.patch, hbase-16095_v2.patch, hbase-16095_v3.patch > > > This is in the similar area with HBASE-15816, and also required with the > current secondary indexing for Phoenix. > The problem with P secondary indexes is that data table regions depend on > index regions to be able to make progress. Possible distributed deadlocks can > be prevented via custom RpcScheduler + RpcController configuration via > HBASE-11048 and PHOENIX-938. However, region opening also has the same > deadlock situation, because data region open has to replay the WAL edits to > the index regions. There is only 1 thread pool to open regions with 3 workers > by default. So if the cluster is recovering / restarting from scratch, the > deadlock happens because some index regions cannot be opened due to them > being in the same queue waiting for data regions to open (which waits for > RPC'ing to index regions which is not open). This is reproduced in almost all > Phoenix secondary index clusters (mutable table w/o transactions) that we > see. > The proposal is to have a "high priority" region opening thread pool, and > have the HTD carry the relative priority of a table. This maybe useful for > other "framework" level tables from Phoenix, Tephra, Trafodian, etc if they > want some specific tables to become online faster. > As a follow up patch, we can also take a look at how this priority > information can be used by the rpc scheduler on the server side or rpc > controller on the client side, so that we do not have to set priorities > manually per-operation. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16095) Add priority to TableDescriptor and priority region open thread pool
[ https://issues.apache.org/jira/browse/HBASE-16095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell updated HBASE-16095: --- Fix Version/s: 0.98.21 > Add priority to TableDescriptor and priority region open thread pool > > > Key: HBASE-16095 > URL: https://issues.apache.org/jira/browse/HBASE-16095 > Project: HBase > Issue Type: Bug >Reporter: Enis Soztutar >Assignee: Enis Soztutar > Fix For: 2.0.0, 1.3.0, 1.4.0, 0.98.21 > > Attachments: hbase-16095_v0.patch, hbase-16095_v1.patch, > hbase-16095_v2.patch, hbase-16095_v3.patch > > > This is in the similar area with HBASE-15816, and also required with the > current secondary indexing for Phoenix. > The problem with P secondary indexes is that data table regions depend on > index regions to be able to make progress. Possible distributed deadlocks can > be prevented via custom RpcScheduler + RpcController configuration via > HBASE-11048 and PHOENIX-938. However, region opening also has the same > deadlock situation, because data region open has to replay the WAL edits to > the index regions. There is only 1 thread pool to open regions with 3 workers > by default. So if the cluster is recovering / restarting from scratch, the > deadlock happens because some index regions cannot be opened due to them > being in the same queue waiting for data regions to open (which waits for > RPC'ing to index regions which is not open). This is reproduced in almost all > Phoenix secondary index clusters (mutable table w/o transactions) that we > see. > The proposal is to have a "high priority" region opening thread pool, and > have the HTD carry the relative priority of a table. This maybe useful for > other "framework" level tables from Phoenix, Tephra, Trafodian, etc if they > want some specific tables to become online faster. > As a follow up patch, we can also take a look at how this priority > information can be used by the rpc scheduler on the server side or rpc > controller on the client side, so that we do not have to set priorities > manually per-operation. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16169) Make RegionSizeCalculator scalable
[ https://issues.apache.org/jira/browse/HBASE-16169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thiruvel Thirumoolan updated HBASE-16169: - Attachment: HBASE-16169.master.003.patch > Make RegionSizeCalculator scalable > -- > > Key: HBASE-16169 > URL: https://issues.apache.org/jira/browse/HBASE-16169 > Project: HBase > Issue Type: Sub-task > Components: mapreduce, scaling >Reporter: Thiruvel Thirumoolan >Assignee: Thiruvel Thirumoolan > Fix For: 2.0.0, 1.4.0 > > Attachments: HBASE-16169.master.000.patch, > HBASE-16169.master.001.patch, HBASE-16169.master.002.patch, > HBASE-16169.master.003.patch > > > RegionSizeCalculator is needed for better split generation of MR jobs. This > requires RegionLoad which can be obtained via ClusterStatus, i.e. accessing > Master. We don't want master to be in this path. > The proposal is to add an API to the RegionServer that gets RegionLoad of all > regions hosted on it or those of a table if specified. RegionSizeCalculator > can use the latter. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
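The description above proposes a region-server-side view of region load; a hedged sketch of what such an interface could look like follows, with names and fields that are assumptions rather than the API added by the attached patches:
{code}
import java.io.IOException;
import java.util.List;

// Hypothetical region server API: fetch load for the regions a server hosts,
// optionally restricted to one table, so RegionSizeCalculator does not need
// ClusterStatus from the master.
public interface RegionLoadProvider {
  List<RegionLoadInfo> getRegionLoads() throws IOException;

  List<RegionLoadInfo> getRegionLoads(String tableName) throws IOException;

  // Minimal per-region view used by a hypothetical split-size calculation.
  final class RegionLoadInfo {
    private final byte[] regionName;
    private final long storefileSizeMB;

    public RegionLoadInfo(byte[] regionName, long storefileSizeMB) {
      this.regionName = regionName;
      this.storefileSizeMB = storefileSizeMB;
    }

    public byte[] getRegionName() { return regionName; }
    public long getStorefileSizeMB() { return storefileSizeMB; }
  }
}
{code}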
[jira] [Commented] (HBASE-15305) Fix a couple of incorrect anchors in HBase ref guide
[ https://issues.apache.org/jira/browse/HBASE-15305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15376167#comment-15376167 ] Hadoop QA commented on HBASE-15305: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 43s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 2m 18s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 59s {color} | {color:green} master passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 48s {color} | {color:green} master passed with JDK v1.7.0_80 {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 56s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 2m 11s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 36m 31s {color} | {color:green} Patch does not cause any errors with Hadoop 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 18s {color} | {color:green} the patch passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 50s {color} | {color:green} the patch passed with JDK v1.7.0_80 {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 95m 48s {color} | {color:red} root in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 36s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 159m 27s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hbase.master.procedure.TestMasterFailoverWithProcedures | | Timed out junit tests | org.apache.hadoop.hbase.client.TestMetaWithReplicas | | | org.apache.hadoop.hbase.client.TestFromClientSide3 | | | org.apache.hadoop.hbase.client.TestAdmin1 | | | org.apache.hadoop.hbase.client.TestMobRestoreSnapshotFromClient | | | org.apache.hadoop.hbase.client.TestMobSnapshotCloneIndependence | \\ \\ || Subsystem || Report/Notes || | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12817820/HBASE-15305-v2.patch | | JIRA Issue | HBASE-15305 | | Optional Tests | asflicense javac javadoc unit | | uname | Linux priapus.apache.org 3.13.0-86-generic #131-Ubuntu SMP Thu May 12 23:33:13 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / 86f3768 | | Default Java | 1.7.0_80 | | Multi-JDK versions | /usr/local/jenkins/java/jdk1.8.0:1.8.0 /home/jenkins/jenkins-slave/tools/hudson.model.JDK/JDK_1.7_latest_:1.7.0_80 | | unit | https://builds.apache.org/job/PreCommit-HBASE-Build/2629/artifact/patchprocess/patch-unit-root.txt | | unit test logs | https://builds.apache.org/job/PreCommit-HBASE-Build/2629/artifact/patchprocess/patch-unit-root.txt | | Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/2629/testReport/ | | modules | C: . U: . | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/2629/console | | Powered by | Apache Yetus 0.2.1 http://yetus.apache.org | This message was automatically generated. > Fix a couple of incorrect anchors in HBase ref guide > > > Key: HBASE-15305 > URL: https://issues.apache.org/jira/browse/HBASE-15305 > Project: HBase > Issue Type: Sub-task > Components: documentation >Reporter: Misty Stanley-Jones >Assignee: Misty Stanley-Jones > Fix For: 2.0.0 > > Attachments: HBASE-15305-v2.patch > > > From HBASE-15298: > {quote} > After this patch is applied, there are still two missing asciidoc anchors, > d
[jira] [Commented] (HBASE-16225) Refactor ScanQueryMatcher
[ https://issues.apache.org/jira/browse/HBASE-16225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15376163#comment-15376163 ] Duo Zhang commented on HBASE-16225: --- [~apurtell] That will be awesome, but honestly I do not know if it could work right now... Let me split the current {{ScanQueryMatcher}} into several classes first... Thanks. > Refactor ScanQueryMatcher > - > > Key: HBASE-16225 > URL: https://issues.apache.org/jira/browse/HBASE-16225 > Project: HBase > Issue Type: Improvement >Reporter: Duo Zhang > > As said in HBASE-16223, the code of {{ScanQueryMatcher}} is too complicated. > I suggest that we abstract an interface and implement several subclasses > that separate the different logic into different implementations. For example, > the requirements of compaction and user scan are different, yet currently we > also need to consider the user scan logic even if we only want to add logic for > compaction. And at least, the raw scan does not need a query matcher... we > can implement a dummy query matcher for it. > Suggestions are welcome. Thanks. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
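As a rough sketch of the proposed split, one possible shape is an interface with per-use-case implementations and a trivial matcher for raw scans; the names and match codes below are illustrative assumptions, not the actual ScanQueryMatcher API:
{code}
import org.apache.hadoop.hbase.Cell;

// Rough shape of the proposed refactoring: one small interface, with separate
// implementations for compaction, user scan and raw scan instead of one class
// handling all of them.
public interface QueryMatcher {
  enum MatchCode { INCLUDE, SKIP, DONE }

  MatchCode match(Cell cell);

  /** Raw scans need no filtering logic, so a trivial matcher is enough. */
  final class RawScanMatcher implements QueryMatcher {
    @Override
    public MatchCode match(Cell cell) {
      return MatchCode.INCLUDE;
    }
  }
}
{code}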
[jira] [Commented] (HBASE-3727) MultiHFileOutputFormat
[ https://issues.apache.org/jira/browse/HBASE-3727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15376162#comment-15376162 ] Hadoop QA commented on HBASE-3727: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 2s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s {color} | {color:green} master passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s {color} | {color:green} master passed with JDK v1.7.0_80 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 53s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 16s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 0s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s {color} | {color:green} master passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 34s {color} | {color:green} master passed with JDK v1.7.0_80 {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 44s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s {color} | {color:green} the patch passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 46s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s {color} | {color:green} the patch passed with JDK v1.7.0_80 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 34s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 52s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 16s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 26m 23s {color} | {color:green} Patch does not cause any errors with Hadoop 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 21s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 35s {color} | {color:green} the patch passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 35s {color} | {color:green} the patch passed with JDK v1.7.0_80 {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 97m 11s {color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s {color} | {color:green} Patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 139m 26s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hbase.snapshot.TestMobFlushSnapshotFromClient | \\ \\ || Subsystem || Report/Notes || | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12817827/HBASE-3727-V5.patch | | JIRA Issue | HBASE-3727 | | Optional Tests | asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux asf907.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / 86f3768 | | Default Java | 1.7.0_80 | | Multi-JDK versions | /home/jenkins/tools/java/jdk1.8.0:1.8.0 /home/jenkins/jenkins-slave/tools/hudson.model.JDK/JDK_1.7_latest_:1.7.0_80 | |
[jira] [Updated] (HBASE-14813) REST documentation under package.html should go to the book
[ https://issues.apache.org/jira/browse/HBASE-14813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Misty Stanley-Jones updated HBASE-14813: Resolution: Fixed Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) Thank you! > REST documentation under package.html should go to the book > --- > > Key: HBASE-14813 > URL: https://issues.apache.org/jira/browse/HBASE-14813 > Project: HBase > Issue Type: Improvement > Components: documentation, REST >Reporter: Enis Soztutar >Assignee: Misty Stanley-Jones > Attachments: HBASE-14813-v1.patch, HBASE-14813.patch > > > It seems that we have more up to date and better documentation under > {{hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/package.html}} than > in the book. We should merge these two. The package.html is only accessible > if you know where to look. > [~misty] FYI. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16095) Add priority to TableDescriptor and priority region open thread pool
[ https://issues.apache.org/jira/browse/HBASE-16095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Enis Soztutar updated HBASE-16095: -- Resolution: Fixed Fix Version/s: 1.3.0 Status: Resolved (was: Patch Available) > Add priority to TableDescriptor and priority region open thread pool > > > Key: HBASE-16095 > URL: https://issues.apache.org/jira/browse/HBASE-16095 > Project: HBase > Issue Type: Bug >Reporter: Enis Soztutar >Assignee: Enis Soztutar > Fix For: 2.0.0, 1.3.0, 1.4.0 > > Attachments: hbase-16095_v0.patch, hbase-16095_v1.patch, > hbase-16095_v2.patch, hbase-16095_v3.patch > > > This is in the similar area with HBASE-15816, and also required with the > current secondary indexing for Phoenix. > The problem with P secondary indexes is that data table regions depend on > index regions to be able to make progress. Possible distributed deadlocks can > be prevented via custom RpcScheduler + RpcController configuration via > HBASE-11048 and PHOENIX-938. However, region opening also has the same > deadlock situation, because data region open has to replay the WAL edits to > the index regions. There is only 1 thread pool to open regions with 3 workers > by default. So if the cluster is recovering / restarting from scratch, the > deadlock happens because some index regions cannot be opened due to them > being in the same queue waiting for data regions to open (which waits for > RPC'ing to index regions which is not open). This is reproduced in almost all > Phoenix secondary index clusters (mutable table w/o transactions) that we > see. > The proposal is to have a "high priority" region opening thread pool, and > have the HTD carry the relative priority of a table. This maybe useful for > other "framework" level tables from Phoenix, Tephra, Trafodian, etc if they > want some specific tables to become online faster. > As a follow up patch, we can also take a look at how this priority > information can be used by the rpc scheduler on the server side or rpc > controller on the client side, so that we do not have to set priorities > manually per-operation. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16095) Add priority to TableDescriptor and priority region open thread pool
[ https://issues.apache.org/jira/browse/HBASE-16095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15376147#comment-15376147 ] Enis Soztutar commented on HBASE-16095: --- Thanks Andrew for the clarification. I've just committed this with a small change: {code} +if (htd.getPriority() >= HConstants.ADMIN_QOS || region.getTable().isSystemTable()) { + regionServer.service.submit(new OpenPriorityRegionHandler( {code} so that we use the priority handler pool for system tables that are not meta. > Add priority to TableDescriptor and priority region open thread pool > > > Key: HBASE-16095 > URL: https://issues.apache.org/jira/browse/HBASE-16095 > Project: HBase > Issue Type: Bug >Reporter: Enis Soztutar >Assignee: Enis Soztutar > Fix For: 2.0.0, 1.4.0 > > Attachments: hbase-16095_v0.patch, hbase-16095_v1.patch, > hbase-16095_v2.patch, hbase-16095_v3.patch > > > This is in the similar area with HBASE-15816, and also required with the > current secondary indexing for Phoenix. > The problem with P secondary indexes is that data table regions depend on > index regions to be able to make progress. Possible distributed deadlocks can > be prevented via custom RpcScheduler + RpcController configuration via > HBASE-11048 and PHOENIX-938. However, region opening also has the same > deadlock situation, because data region open has to replay the WAL edits to > the index regions. There is only 1 thread pool to open regions with 3 workers > by default. So if the cluster is recovering / restarting from scratch, the > deadlock happens because some index regions cannot be opened due to them > being in the same queue waiting for data regions to open (which waits for > RPC'ing to index regions which is not open). This is reproduced in almost all > Phoenix secondary index clusters (mutable table w/o transactions) that we > see. > The proposal is to have a "high priority" region opening thread pool, and > have the HTD carry the relative priority of a table. This maybe useful for > other "framework" level tables from Phoenix, Tephra, Trafodian, etc if they > want some specific tables to become online faster. > As a follow up patch, we can also take a look at how this priority > information can be used by the rpc scheduler on the server side or rpc > controller on the client side, so that we do not have to set priorities > manually per-operation. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
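To show how the table-level priority is meant to be consumed by the check in the snippet above, here is a hedged usage sketch; the table name and column family are made up, and it assumes the setPriority accessor introduced by this patch together with an existing Admin instance:
{code}
import java.io.IOException;

import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

public class CreateHighPriorityTable {
  // Creates a hypothetical index table whose regions should be opened by the
  // priority region open pool (priority >= ADMIN_QOS per the check above).
  public static void createIndexTable(Admin admin) throws IOException {
    HTableDescriptor htd = new HTableDescriptor(TableName.valueOf("MY_INDEX_TABLE"));
    htd.addFamily(new HColumnDescriptor("0"));
    htd.setPriority(HConstants.HIGH_QOS);
    admin.createTable(htd);
  }
}
{code}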
[jira] [Updated] (HBASE-16095) Add priority to TableDescriptor and priority region open thread pool
[ https://issues.apache.org/jira/browse/HBASE-16095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Enis Soztutar updated HBASE-16095: -- Attachment: hbase-16095_v3.patch > Add priority to TableDescriptor and priority region open thread pool > > > Key: HBASE-16095 > URL: https://issues.apache.org/jira/browse/HBASE-16095 > Project: HBase > Issue Type: Bug >Reporter: Enis Soztutar >Assignee: Enis Soztutar > Fix For: 2.0.0, 1.4.0 > > Attachments: hbase-16095_v0.patch, hbase-16095_v1.patch, > hbase-16095_v2.patch, hbase-16095_v3.patch > > > This is in the similar area with HBASE-15816, and also required with the > current secondary indexing for Phoenix. > The problem with P secondary indexes is that data table regions depend on > index regions to be able to make progress. Possible distributed deadlocks can > be prevented via custom RpcScheduler + RpcController configuration via > HBASE-11048 and PHOENIX-938. However, region opening also has the same > deadlock situation, because data region open has to replay the WAL edits to > the index regions. There is only 1 thread pool to open regions with 3 workers > by default. So if the cluster is recovering / restarting from scratch, the > deadlock happens because some index regions cannot be opened due to them > being in the same queue waiting for data regions to open (which waits for > RPC'ing to index regions which is not open). This is reproduced in almost all > Phoenix secondary index clusters (mutable table w/o transactions) that we > see. > The proposal is to have a "high priority" region opening thread pool, and > have the HTD carry the relative priority of a table. This maybe useful for > other "framework" level tables from Phoenix, Tephra, Trafodian, etc if they > want some specific tables to become online faster. > As a follow up patch, we can also take a look at how this priority > information can be used by the rpc scheduler on the server side or rpc > controller on the client side, so that we do not have to set priorities > manually per-operation. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16210) Add Timestamp class to the hbase-common and Timestamp type to HTable.
[ https://issues.apache.org/jira/browse/HBASE-16210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15376137#comment-15376137 ] Sai Teja Ranuva commented on HBASE-16210: - [~enis] No change in the plan as of now. I am just trying to get the patch in by breaking it into smaller logical parts. > Add Timestamp class to the hbase-common and Timestamp type to HTable. > - > > Key: HBASE-16210 > URL: https://issues.apache.org/jira/browse/HBASE-16210 > Project: HBase > Issue Type: Sub-task >Reporter: Sai Teja Ranuva >Assignee: Sai Teja Ranuva >Priority: Minor > Labels: patch, testing > Attachments: HBASE-16210.master.1.patch, HBASE-16210.master.2.patch, > HBASE-16210.master.3.patch, HBASE-16210.master.4.patch, > HBASE-16210.master.5.patch, HBASE-16210.master.6.patch > > > This is a sub-issue of > [HBase-14070|https://issues.apache.org/jira/browse/HBASE-14070]. This JIRA is > a small step towards completely adding Hybrid Logical Clocks (HLC) to HBase. > The main idea of HLC is described in > [HBase-14070|https://issues.apache.org/jira/browse/HBASE-14070] along with > the motivation for adding it to HBase. > What is this patch/issue about? > This issue attempts to add a timestamp class to hbase-common and a timestamp > type to HTable. > This is a part of the attempt to get HLC into HBase. This patch does not > interfere with the current working of HBase. > Why a Timestamp class? > The Timestamp class serves as an abstraction to represent time in HBase in 64 > bits. > It is only used for manipulating the 64 bits of the timestamp and is not > concerned with the actual time. > There are three types of timestamps: system time, custom, and HLC. Each of > them has methods to manipulate the 64 bits of the timestamp. > HTable changes: added a timestamp type property to HTable. This will help > HBase coexist with the old type of timestamp as well as the HLC that will be > introduced. The default is set to the custom timestamp (the current way > timestamps are used), and the default unset timestamp is also the custom > timestamp, as it should be. The default timestamp will be changed to HLC when > the HLC feature is introduced completely in HBase. > Check HBASE-16210.master.6.patch. > Suggestions are welcome. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
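Since the description is about manipulating the 64 bits of a timestamp, a small illustrative layout for an HLC-style value is sketched below; the 44/20 bit split and the class name are assumptions for illustration, not the encoding proposed in the attached patches:
{code}
// Illustrative 64-bit layout for an HLC-style timestamp: the physical time in
// milliseconds sits in the high bits and a logical sequence number in the low
// 20 bits. The split and class name are assumptions, not the patch's encoding.
public final class HlcTimestamp64 {
  private static final int LOGICAL_BITS = 20; // ~1M events per millisecond
  private static final long LOGICAL_MASK = (1L << LOGICAL_BITS) - 1;

  private HlcTimestamp64() {
  }

  public static long pack(long physicalMillis, long logical) {
    return (physicalMillis << LOGICAL_BITS) | (logical & LOGICAL_MASK);
  }

  public static long physicalMillis(long timestamp) {
    return timestamp >>> LOGICAL_BITS;
  }

  public static long logical(long timestamp) {
    return timestamp & LOGICAL_MASK;
  }
}
{code}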
[jira] [Commented] (HBASE-16209) Provide an ExponentialBackOffPolicy sleep between failed region open requests
[ https://issues.apache.org/jira/browse/HBASE-16209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15376133#comment-15376133 ] Joseph commented on HBASE-16209: I just fixed the original bug in the Web UI and uploaded a new patch. > Provide an ExponentialBackOffPolicy sleep between failed region open requests > - > > Key: HBASE-16209 > URL: https://issues.apache.org/jira/browse/HBASE-16209 > Project: HBase > Issue Type: Bug >Reporter: Joseph >Assignee: Joseph > Attachments: HBASE-16209.patch > > > Related to HBASE-16138. As of now we currently have no pause between retrying > failed region open requests. And with a low maximumAttempt default, we can > quickly use up all our regionOpen retries if the server is in a bad state. I > added in a ExponentialBackOffPolicy so that we spread out the timing of our > open region retries in AssignmentManager. Review board at > https://reviews.apache.org/r/50011/ -- This message was sent by Atlassian JIRA (v6.3.4#6332)
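For readers unfamiliar with the approach, a minimal sketch of exponential backoff between region open retries follows; the base delay, cap, jitter and class name are assumptions, not values taken from HBASE-16209.patch:
{code}
import java.util.concurrent.ThreadLocalRandom;

// Illustrative backoff: the delay doubles per attempt, is capped, and gets some
// jitter so retries for many regions do not line up at the same instant.
public final class RetryBackoff {
  private static final long BASE_SLEEP_MS = 1000L;
  private static final long MAX_SLEEP_MS = 60000L;

  private RetryBackoff() {
  }

  // attempt = 0 for the first retry; returns how long to sleep before retrying.
  public static long sleepMillis(int attempt) {
    long exponential = BASE_SLEEP_MS << Math.min(attempt, 20); // 1s, 2s, 4s, ...
    long capped = Math.min(exponential, MAX_SLEEP_MS);
    long jitter = ThreadLocalRandom.current().nextLong(capped / 4 + 1);
    return capped + jitter;
  }
}
{code}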
[jira] [Updated] (HBASE-16209) Provide an ExponentialBackOffPolicy sleep between failed region open requests
[ https://issues.apache.org/jira/browse/HBASE-16209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joseph updated HBASE-16209: --- Status: Patch Available (was: Open) > Provide an ExponentialBackOffPolicy sleep between failed region open requests > - > > Key: HBASE-16209 > URL: https://issues.apache.org/jira/browse/HBASE-16209 > Project: HBase > Issue Type: Bug >Reporter: Joseph >Assignee: Joseph > Attachments: HBASE-16209.patch > > > Related to HBASE-16138. As of now we currently have no pause between retrying > failed region open requests. And with a low maximumAttempt default, we can > quickly use up all our regionOpen retries if the server is in a bad state. I > added in a ExponentialBackOffPolicy so that we spread out the timing of our > open region retries in AssignmentManager. Review board at > https://reviews.apache.org/r/50011/ -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16209) Provide an ExponentialBackOffPolicy sleep between failed region open requests
[ https://issues.apache.org/jira/browse/HBASE-16209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joseph updated HBASE-16209: --- Attachment: HBASE-16209.patch > Provide an ExponentialBackOffPolicy sleep between failed region open requests > - > > Key: HBASE-16209 > URL: https://issues.apache.org/jira/browse/HBASE-16209 > Project: HBase > Issue Type: Bug >Reporter: Joseph >Assignee: Joseph > Attachments: HBASE-16209.patch > > > Related to HBASE-16138. As of now we currently have no pause between retrying > failed region open requests. And with a low maximumAttempt default, we can > quickly use up all our regionOpen retries if the server is in a bad state. I > added in a ExponentialBackOffPolicy so that we spread out the timing of our > open region retries in AssignmentManager. Review board at > https://reviews.apache.org/r/50011/ -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16209) Provide an ExponentialBackOffPolicy sleep between failed region open requests
[ https://issues.apache.org/jira/browse/HBASE-16209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joseph updated HBASE-16209: --- Status: Open (was: Patch Available) > Provide an ExponentialBackOffPolicy sleep between failed region open requests > - > > Key: HBASE-16209 > URL: https://issues.apache.org/jira/browse/HBASE-16209 > Project: HBase > Issue Type: Bug >Reporter: Joseph >Assignee: Joseph > Attachments: HBASE-16209.patch > > > Related to HBASE-16138. As of now we currently have no pause between retrying > failed region open requests. And with a low maximumAttempt default, we can > quickly use up all our regionOpen retries if the server is in a bad state. I > added in a ExponentialBackOffPolicy so that we spread out the timing of our > open region retries in AssignmentManager. Review board at > https://reviews.apache.org/r/50011/ -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16209) Provide an ExponentialBackOffPolicy sleep between failed region open requests
[ https://issues.apache.org/jira/browse/HBASE-16209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joseph updated HBASE-16209: --- Attachment: (was: HBASE-16209.patch) > Provide an ExponentialBackOffPolicy sleep between failed region open requests > - > > Key: HBASE-16209 > URL: https://issues.apache.org/jira/browse/HBASE-16209 > Project: HBase > Issue Type: Bug >Reporter: Joseph >Assignee: Joseph > > Related to HBASE-16138. As of now we currently have no pause between retrying > failed region open requests. And with a low maximumAttempt default, we can > quickly use up all our regionOpen retries if the server is in a bad state. I > added in a ExponentialBackOffPolicy so that we spread out the timing of our > open region retries in AssignmentManager. Review board at > https://reviews.apache.org/r/50011/ -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16209) Provide an ExponentialBackOffPolicy sleep between failed region open requests
[ https://issues.apache.org/jira/browse/HBASE-16209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joseph updated HBASE-16209: --- Attachment: HBASE-16209.patch > Provide an ExponentialBackOffPolicy sleep between failed region open requests > - > > Key: HBASE-16209 > URL: https://issues.apache.org/jira/browse/HBASE-16209 > Project: HBase > Issue Type: Bug >Reporter: Joseph >Assignee: Joseph > Attachments: HBASE-16209.patch > > > Related to HBASE-16138. As of now we currently have no pause between retrying > failed region open requests. And with a low maximumAttempt default, we can > quickly use up all our regionOpen retries if the server is in a bad state. I > added in a ExponentialBackOffPolicy so that we spread out the timing of our > open region retries in AssignmentManager. Review board at > https://reviews.apache.org/r/50011/ -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16209) Provide an ExponentialBackOffPolicy sleep between failed region open requests
[ https://issues.apache.org/jira/browse/HBASE-16209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joseph updated HBASE-16209: --- Attachment: (was: HBASE-16209.patch) > Provide an ExponentialBackOffPolicy sleep between failed region open requests > - > > Key: HBASE-16209 > URL: https://issues.apache.org/jira/browse/HBASE-16209 > Project: HBase > Issue Type: Bug >Reporter: Joseph >Assignee: Joseph > Attachments: HBASE-16209.patch > > > Related to HBASE-16138. As of now we currently have no pause between retrying > failed region open requests. And with a low maximumAttempt default, we can > quickly use up all our regionOpen retries if the server is in a bad state. I > added in a ExponentialBackOffPolicy so that we spread out the timing of our > open region retries in AssignmentManager. Review board at > https://reviews.apache.org/r/50011/ -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-14813) REST documentation under package.html should go to the book
[ https://issues.apache.org/jira/browse/HBASE-14813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15376117#comment-15376117 ] Hadoop QA commented on HBASE-14813: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 21s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 13s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 23s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 12s {color} | {color:green} master passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 49s {color} | {color:green} master passed with JDK v1.7.0_80 {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 25s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 22s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 27m 17s {color} | {color:green} Patch does not cause any errors with Hadoop 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 6s {color} | {color:green} the patch passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 53s {color} | {color:green} the patch passed with JDK v1.7.0_80 {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 20s {color} | {color:green} hbase-rest in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 102m 48s {color} | {color:red} root in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 34s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 157m 10s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hbase.snapshot.TestFlushSnapshotFromClient | | Timed out junit tests | org.apache.hadoop.hbase.io.hfile.TestCacheOnWrite | | | org.apache.hadoop.hbase.io.asyncfs.TestSaslFanOutOneBlockAsyncDFSOutput | | | org.apache.hadoop.hbase.io.encoding.TestEncodedSeekers | | | org.apache.hadoop.hbase.io.asyncfs.TestFanOutOneBlockAsyncDFSOutput | | | org.apache.hadoop.hbase.io.encoding.TestChangingEncoding | \\ \\ || Subsystem || Report/Notes || | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12817817/HBASE-14813-v1.patch | | JIRA Issue | HBASE-14813 | | Optional Tests | asflicense javac javadoc unit | | uname | Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / 86f3768 | | Default Java | 1.7.0_80 | | Multi-JDK versions | /home/jenkins/tools/java/jdk1.8.0:1.8.0 /home/jenkins/jenkins-slave/tools/hudson.model.JDK/JDK_1.7_latest_:1.7.0_80 | | unit | https://builds.apache.org/job/PreCommit-HBASE-Build/2628/artifact/patchprocess/patch-unit-root.txt | | unit test logs | https://builds.apache.org/job/PreCommit-HBASE-Build/2628/artifact/patchprocess/patch-unit-root.txt | | Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/2628/testReport/ | | modules | C: hbase-rest . U: . | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/2628/console | | Powered by | Apache Yetus 0.2.1 http://yetus.apache.org | This message was automatically generated. > REST documentation under package.html should go to the book > -
[jira] [Commented] (HBASE-16210) Add Timestamp class to the hbase-common and Timestamp type to HTable.
[ https://issues.apache.org/jira/browse/HBASE-16210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15376116#comment-15376116 ] Enis Soztutar commented on HBASE-16210: --- [~saitejar] What is the plan here? Are you working on getting the patch up to date and compiling? Any divergence from the original plan and approach? > Add Timestamp class to the hbase-common and Timestamp type to HTable. > - > > Key: HBASE-16210 > URL: https://issues.apache.org/jira/browse/HBASE-16210 > Project: HBase > Issue Type: Sub-task >Reporter: Sai Teja Ranuva >Assignee: Sai Teja Ranuva >Priority: Minor > Labels: patch, testing > Attachments: HBASE-16210.master.1.patch, HBASE-16210.master.2.patch, > HBASE-16210.master.3.patch, HBASE-16210.master.4.patch, > HBASE-16210.master.5.patch, HBASE-16210.master.6.patch > > > This is a sub-issue of > [HBase-14070|https://issues.apache.org/jira/browse/HBASE-14070]. This JIRA is > a small step towards completely adding Hybrid Logical Clocks (HLC) to HBase. > The main idea of HLC is described in > [HBase-14070|https://issues.apache.org/jira/browse/HBASE-14070] along with > the motivation for adding it to HBase. > What is this patch/issue about? > This issue attempts to add a timestamp class to hbase-common and a timestamp > type to HTable. > This is a part of the attempt to get HLC into HBase. This patch does not > interfere with the current working of HBase. > Why a Timestamp class? > The Timestamp class serves as an abstraction to represent time in HBase in 64 > bits. > It is only used for manipulating the 64 bits of the timestamp and is not > concerned with the actual time. > There are three types of timestamps: system time, custom, and HLC. Each of > them has methods to manipulate the 64 bits of the timestamp. > HTable changes: added a timestamp type property to HTable. This will help > HBase coexist with the old type of timestamp as well as the HLC that will be > introduced. The default is set to the custom timestamp (the current way > timestamps are used), and the default unset timestamp is also the custom > timestamp, as it should be. The default timestamp will be changed to HLC when > the HLC feature is introduced completely in HBase. > Check HBASE-16210.master.6.patch. > Suggestions are welcome. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16228) Add read/write HDFS size metrics for flush/compact/handler
[ https://issues.apache.org/jira/browse/HBASE-16228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] binlijin updated HBASE-16228: - Attachment: HBASE-16228.patch > Add read/write HDFS size metrics for flush/compact/handler > -- > > Key: HBASE-16228 > URL: https://issues.apache.org/jira/browse/HBASE-16228 > Project: HBase > Issue Type: New Feature >Reporter: binlijin > Attachments: HBASE-16228.patch > > > Flush/Compact/Handler and other threads read from or write to HDFS; we can collect > these metrics to see read/write amplification in HBase and test how a > compaction algorithm affects it. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-16228) Add read/write HDFS size metrics for flush/compact/handler
binlijin created HBASE-16228: Summary: Add read/write HDFS size metrics for flush/compact/handler Key: HBASE-16228 URL: https://issues.apache.org/jira/browse/HBASE-16228 Project: HBase Issue Type: New Feature Reporter: binlijin Flush/Compact/Handler and other threads read from or write to HDFS; we can collect these metrics to see read/write amplification in HBase and test how a compaction algorithm affects it. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
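A hedged sketch of what per-source HDFS I/O byte counters could look like is shown below; the class, enum and method names are assumptions rather than the metrics added by HBASE-16228.patch:
{code}
import java.util.EnumMap;
import java.util.Map;
import java.util.concurrent.atomic.LongAdder;

// Hypothetical per-source HDFS I/O counters: flush, compaction and handler code
// paths would bump the matching counters, and a metrics source could export them
// to observe read/write amplification.
public class HdfsIoBytesMetrics {
  public enum Source { FLUSH, COMPACTION, HANDLER, OTHER }

  private final Map<Source, LongAdder> readBytes = new EnumMap<>(Source.class);
  private final Map<Source, LongAdder> writeBytes = new EnumMap<>(Source.class);

  public HdfsIoBytesMetrics() {
    for (Source s : Source.values()) {
      readBytes.put(s, new LongAdder());
      writeBytes.put(s, new LongAdder());
    }
  }

  public void addRead(Source source, long bytes) {
    readBytes.get(source).add(bytes);
  }

  public void addWrite(Source source, long bytes) {
    writeBytes.get(source).add(bytes);
  }

  // e.g. compaction write bytes / flush write bytes approximates write amplification
  public long getReadBytes(Source source) {
    return readBytes.get(source).sum();
  }

  public long getWriteBytes(Source source) {
    return writeBytes.get(source).sum();
  }
}
{code}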
[jira] [Commented] (HBASE-16117) Fix Connection leak in mapred.TableOutputFormat
[ https://issues.apache.org/jira/browse/HBASE-16117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15376055#comment-15376055 ] Hadoop QA commented on HBASE-16117: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 19s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 53s {color} | {color:green} branch-1 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 12s {color} | {color:green} branch-1 passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s {color} | {color:green} branch-1 passed with JDK v1.7.0_80 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 56s {color} | {color:green} branch-1 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 25s {color} | {color:green} branch-1 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 51s {color} | {color:green} branch-1 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s {color} | {color:green} branch-1 passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s {color} | {color:green} branch-1 passed with JDK v1.7.0_80 {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 3s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 2s {color} | {color:green} the patch passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 2s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s {color} | {color:green} the patch passed with JDK v1.7.0_80 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 50s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 57s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 25s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 15m 29s {color} | {color:green} Patch does not cause any errors with Hadoop 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 21s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s {color} | {color:green} the patch passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s {color} | {color:green} the patch passed with JDK v1.7.0_80 {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 32s {color} | {color:green} hbase-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 86m 45s {color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 28s {color} | {color:green} Patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 124m 58s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hbase.util.TestMergeTool | \\ \\ || Subsystem || Report/Notes || | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12817812/HBASE-16117.branch-1.001.patch | | JIRA Issue | HBASE-16117 | | Optional Tests | asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlat
[jira] [Commented] (HBASE-16209) Provide an ExponentialBackOffPolicy sleep between failed region open requests
[ https://issues.apache.org/jira/browse/HBASE-16209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15376041#comment-15376041 ] Joseph commented on HBASE-16209: There appears to be a pre-existing bug with the Web UI and RegionTransitions. I will try to address that bug as part of this patch, but it would be nice if I could get some feedback on the AssignmentManager code. Thanks! > Provide an ExponentialBackOffPolicy sleep between failed region open requests > - > > Key: HBASE-16209 > URL: https://issues.apache.org/jira/browse/HBASE-16209 > Project: HBase > Issue Type: Bug >Reporter: Joseph >Assignee: Joseph > Attachments: HBASE-16209.patch > > > Related to HBASE-16138. As of now we currently have no pause between retrying > failed region open requests. And with a low maximumAttempt default, we can > quickly use up all our regionOpen retries if the server is in a bad state. I > added in a ExponentialBackOffPolicy so that we spread out the timing of our > open region retries in AssignmentManager. Review board at > https://reviews.apache.org/r/50011/ -- This message was sent by Atlassian JIRA (v6.3.4#6332)
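For reference, the backoff described above amounts to doubling the wait before each region-open retry, with a cap and a little jitter so many regions do not retry in lockstep. The sketch below only illustrates that policy with made-up names and values; the actual AssignmentManager change is the one posted on the review board link.
{code}
// Minimal, self-contained sketch of an exponential backoff between retries.
// Class name, initial sleep and cap are illustrative, not the real patch.
import java.util.concurrent.ThreadLocalRandom;

public class ExponentialBackoffSketch {
  private final long initialSleepMs;
  private final long maxSleepMs;

  public ExponentialBackoffSketch(long initialSleepMs, long maxSleepMs) {
    this.initialSleepMs = initialSleepMs;
    this.maxSleepMs = maxSleepMs;
  }

  /** Sleep time before the given retry attempt (1-based): doubled each time, jittered, capped. */
  public long getBackoffTime(int attempt) {
    long backoff = initialSleepMs * (1L << Math.min(attempt - 1, 30)); // cap the shift to avoid overflow
    long jitter = ThreadLocalRandom.current().nextLong(initialSleepMs); // spread out simultaneous retries
    return Math.min(backoff + jitter, maxSleepMs);
  }

  public static void main(String[] args) {
    ExponentialBackoffSketch policy = new ExponentialBackoffSketch(1000, 60000);
    for (int attempt = 1; attempt <= 6; attempt++) {
      System.out.println("attempt " + attempt + " -> sleep ~" + policy.getBackoffTime(attempt) + " ms");
    }
  }
}
{code}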
[jira] [Updated] (HBASE-16209) Provide an ExponentialBackOffPolicy sleep between failed region open requests
[ https://issues.apache.org/jira/browse/HBASE-16209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joseph updated HBASE-16209: --- Description: Related to HBASE-16138. As of now we currently have no pause between retrying failed region open requests. And with a low maximumAttempt default, we can quickly use up all our regionOpen retries if the server is in a bad state. I added in a ExponentialBackOffPolicy so that we spread out the timing of our open region retries in AssignmentManager. Review board at https://reviews.apache.org/r/50011/ (was: Related to HBASE-16138. As of now we currently have no pause between retrying failed region open requests. And with a low maximumAttempt default, we can quickly use up all our regionOpen retries if the server is in a bad state. I added in a ExponentialBackOffPolicy so that we spread out the timing of our open region retries in AssignmentManager. ) > Provide an ExponentialBackOffPolicy sleep between failed region open requests > - > > Key: HBASE-16209 > URL: https://issues.apache.org/jira/browse/HBASE-16209 > Project: HBase > Issue Type: Bug >Reporter: Joseph >Assignee: Joseph > Attachments: HBASE-16209.patch > > > Related to HBASE-16138. As of now we currently have no pause between retrying > failed region open requests. And with a low maximumAttempt default, we can > quickly use up all our regionOpen retries if the server is in a bad state. I > added in a ExponentialBackOffPolicy so that we spread out the timing of our > open region retries in AssignmentManager. Review board at > https://reviews.apache.org/r/50011/ -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16209) Provide an ExponentialBackOffPolicy sleep between failed region open requests
[ https://issues.apache.org/jira/browse/HBASE-16209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joseph updated HBASE-16209: --- Attachment: HBASE-16209.patch > Provide an ExponentialBackOffPolicy sleep between failed region open requests > - > > Key: HBASE-16209 > URL: https://issues.apache.org/jira/browse/HBASE-16209 > Project: HBase > Issue Type: Bug >Reporter: Joseph >Assignee: Joseph > Attachments: HBASE-16209.patch > > > Related to HBASE-16138. As of now we currently have no pause between retrying > failed region open requests. And with a low maximumAttempt default, we can > quickly use up all our regionOpen retries if the server is in a bad state. I > added in a ExponentialBackOffPolicy so that we spread out the timing of our > open region retries in AssignmentManager. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16209) Provide an ExponentialBackOffPolicy sleep between failed region open requests
[ https://issues.apache.org/jira/browse/HBASE-16209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joseph updated HBASE-16209: --- Status: Patch Available (was: Open) > Provide an ExponentialBackOffPolicy sleep between failed region open requests > - > > Key: HBASE-16209 > URL: https://issues.apache.org/jira/browse/HBASE-16209 > Project: HBase > Issue Type: Bug >Reporter: Joseph >Assignee: Joseph > Attachments: HBASE-16209.patch > > > Related to HBASE-16138. As of now we currently have no pause between retrying > failed region open requests. And with a low maximumAttempt default, we can > quickly use up all our regionOpen retries if the server is in a bad state. I > added in a ExponentialBackOffPolicy so that we spread out the timing of our > open region retries in AssignmentManager. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16227) [Shell] Column value formatter not working in scans
[ https://issues.apache.org/jira/browse/HBASE-16227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15376036#comment-15376036 ] Hudson commented on HBASE-16227: SUCCESS: Integrated in HBase-1.2-IT #553 (See [https://builds.apache.org/job/HBase-1.2-IT/553/]) HBASE-16227 [Shell] Column value formatter not working in scans. Tested (appy: rev 54f2d9df2d65d75d129bdf3bb5debaa20bd238f1) * hbase-shell/src/main/ruby/hbase/table.rb > [Shell] Column value formatter not working in scans > --- > > Key: HBASE-16227 > URL: https://issues.apache.org/jira/browse/HBASE-16227 > Project: HBase > Issue Type: Bug >Reporter: Appy >Assignee: Appy > Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.3 > > Attachments: HBASE-16227.branch-1.001.patch, > HBASE-16227.master.001.patch > > > {noformat} > hbase(main):003:0> create 't2', 'f' > Created table t2 > Took 1.2750 seconds > hbase(main):004:0> put 't2', 'row', 'f:x', "\x00\x00\x00\x00\x00&\x1B\x85" > Took 0.0680 seconds > hbase(main):005:0> scan 't2', { COLUMNS => 'f:x:toLong' } > ROW COLUMN+CELL > row column=f:x, > timestamp=1468443538145, value=\x00\x00\x00\x00\x00&\x1B\x85 > 1 row(s) > Took 0.0070 seconds > {noformat} > The value should instead be some number. > Caused by HBASE-5980 > With the patch > {noformat} > hbase(main):001:0> scan 't2', { COLUMNS => 'f:x:toLong' } > ROW COLUMN+CELL > row column=f:x, > timestamp=1468443538145, value=2497413 > 1 row(s) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
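The toLong formatter in the scan above is just the standard 8-byte decoding of the cell value. A quick standalone check (assuming hbase-common, which provides org.apache.hadoop.hbase.util.Bytes, is on the classpath) confirms the number shown with the patch:
{code}
// Decodes the 8-byte cell value from the example and prints 2497413,
// which is what the shell's toLong formatter shows once the patch is applied.
import org.apache.hadoop.hbase.util.Bytes;

public class ToLongFormatterCheck {
  public static void main(String[] args) {
    byte[] value = new byte[] {0x00, 0x00, 0x00, 0x00, 0x00, 0x26, 0x1B, (byte) 0x85};
    System.out.println(Bytes.toLong(value)); // prints 2497413
  }
}
{code}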
[jira] [Updated] (HBASE-16014) Get and Put constructor argument lists are divergent
[ https://issues.apache.org/jira/browse/HBASE-16014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nick Dimiduk updated HBASE-16014: - Assignee: brandboat (was: Nick Dimiduk) > Get and Put constructor argument lists are divergent > > > Key: HBASE-16014 > URL: https://issues.apache.org/jira/browse/HBASE-16014 > Project: HBase > Issue Type: Bug >Reporter: Nick Dimiduk >Assignee: brandboat > > API for construing Get and Put objects for a specific rowkey is quite > different. > [Put|http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Put.html#constructor_summary] > supports many more variations for specifying the target rowkey and timestamp > compared to > [Get|http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Get.html#constructor_summary]. > Notably lacking are {{Get(byte[], int, int)}} and {{Get(ByteBuffer)}} > variations. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
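To make the divergence concrete: Put can already wrap an offset/length slice of a larger buffer, while Get has no (byte[], int, int) or ByteBuffer variant, so callers must copy the rowkey out first. A small sketch against the client API discussed in the issue (the buffer contents and offsets are made up):
{code}
// Illustrates the constructor gap described above: Put accepts a slice of a
// shared buffer, Get currently forces an extra copy of the rowkey.
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class GetPutConstructorDivergence {
  public static void main(String[] args) {
    byte[] buffer = Bytes.toBytes("prefix-rowkey-suffix");
    int offset = 7, length = 6; // the "rowkey" portion of the shared buffer

    Put put = new Put(buffer, offset, length);              // slice variant exists for Put
    Get get = new Get(Bytes.copy(buffer, offset, length));  // Get needs the caller to copy first

    System.out.println(Bytes.toString(put.getRow()) + " / " + Bytes.toString(get.getRow()));
  }
}
{code}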
[jira] [Updated] (HBASE-16209) Provide an ExponentialBackOffPolicy sleep between failed region open requests
[ https://issues.apache.org/jira/browse/HBASE-16209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joseph updated HBASE-16209: --- Description: Related to HBASE-16138. As of now we currently have no pause between retrying failed region open requests. And with a low maximumAttempt default, we can quickly use up all our regionOpen retries if the server is in a bad state. I added in a ExponentialBackOffPolicy so that we spread out the timing of our open region retries in AssignmentManager. (was: Related to HBASE-16138. As of now we currently have no pause between retrying failed region open requests. And with a low maximumAttempt default, we can quickly use up all our regionOpen retries if the server is in a bad state. I added in a ExponentialBackOffPolicy so that we spread out the timing of our open region retries in AssignmentManager. Posted a diff review at https://reviews.apache.org/r/50011/) > Provide an ExponentialBackOffPolicy sleep between failed region open requests > - > > Key: HBASE-16209 > URL: https://issues.apache.org/jira/browse/HBASE-16209 > Project: HBase > Issue Type: Bug >Reporter: Joseph >Assignee: Joseph > > Related to HBASE-16138. As of now we currently have no pause between retrying > failed region open requests. And with a low maximumAttempt default, we can > quickly use up all our regionOpen retries if the server is in a bad state. I > added in a ExponentialBackOffPolicy so that we spread out the timing of our > open region retries in AssignmentManager. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16209) Provide an ExponentialBackOffPolicy sleep between failed region open requests
[ https://issues.apache.org/jira/browse/HBASE-16209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joseph updated HBASE-16209: --- Status: Open (was: Patch Available) > Provide an ExponentialBackOffPolicy sleep between failed region open requests > - > > Key: HBASE-16209 > URL: https://issues.apache.org/jira/browse/HBASE-16209 > Project: HBase > Issue Type: Bug >Reporter: Joseph >Assignee: Joseph > > Related to HBASE-16138. As of now we currently have no pause between retrying > failed region open requests. And with a low maximumAttempt default, we can > quickly use up all our regionOpen retries if the server is in a bad state. I > added in a ExponentialBackOffPolicy so that we spread out the timing of our > open region retries in AssignmentManager. Posted a diff review at > https://reviews.apache.org/r/50011/ -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16209) Provide an ExponentialBackOffPolicy sleep between failed region open requests
[ https://issues.apache.org/jira/browse/HBASE-16209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joseph updated HBASE-16209: --- Attachment: (was: HBASE-16209.patch) > Provide an ExponentialBackOffPolicy sleep between failed region open requests > - > > Key: HBASE-16209 > URL: https://issues.apache.org/jira/browse/HBASE-16209 > Project: HBase > Issue Type: Bug >Reporter: Joseph >Assignee: Joseph > > Related to HBASE-16138. As of now we currently have no pause between retrying > failed region open requests. And with a low maximumAttempt default, we can > quickly use up all our regionOpen retries if the server is in a bad state. I > added in a ExponentialBackOffPolicy so that we spread out the timing of our > open region retries in AssignmentManager. Posted a diff review at > https://reviews.apache.org/r/50011/ -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16209) Provide an ExponentialBackOffPolicy sleep between failed region open requests
[ https://issues.apache.org/jira/browse/HBASE-16209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joseph updated HBASE-16209: --- Description: Related to HBASE-16138. As of now we currently have no pause between retrying failed region open requests. And with a low maximumAttempt default, we can quickly use up all our regionOpen retries if the server is in a bad state. I added in a ExponentialBackOffPolicy so that we spread out the timing of our open region retries in AssignmentManager. Posted a diff review at https://reviews.apache.org/r/50011/ (was: Related to HBASE-16138. As of now we currently have no pause between retrying failed region open requests. And with a low maximumAttempt default, we can quickly use up all our regionOpen retries if the server is in a bad state. I added in a ExponentialBackOffPolicy so that we spread out the timing of our open region retries in AssignmentManager. ) > Provide an ExponentialBackOffPolicy sleep between failed region open requests > - > > Key: HBASE-16209 > URL: https://issues.apache.org/jira/browse/HBASE-16209 > Project: HBase > Issue Type: Bug >Reporter: Joseph >Assignee: Joseph > Attachments: HBASE-16209.patch > > > Related to HBASE-16138. As of now we currently have no pause between retrying > failed region open requests. And with a low maximumAttempt default, we can > quickly use up all our regionOpen retries if the server is in a bad state. I > added in a ExponentialBackOffPolicy so that we spread out the timing of our > open region retries in AssignmentManager. Posted a diff review at > https://reviews.apache.org/r/50011/ -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16209) Provide an ExponentialBackOffPolicy sleep between failed region open requests
[ https://issues.apache.org/jira/browse/HBASE-16209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joseph updated HBASE-16209: --- Status: Patch Available (was: Open) > Provide an ExponentialBackOffPolicy sleep between failed region open requests > - > > Key: HBASE-16209 > URL: https://issues.apache.org/jira/browse/HBASE-16209 > Project: HBase > Issue Type: Bug >Reporter: Joseph >Assignee: Joseph > Attachments: HBASE-16209.patch > > > Related to HBASE-16138. As of now we currently have no pause between retrying > failed region open requests. And with a low maximumAttempt default, we can > quickly use up all our regionOpen retries if the server is in a bad state. I > added in a ExponentialBackOffPolicy so that we spread out the timing of our > open region retries in AssignmentManager. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16209) Provide an ExponentialBackOffPolicy sleep between failed region open requests
[ https://issues.apache.org/jira/browse/HBASE-16209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joseph updated HBASE-16209: --- Attachment: (was: HBASE-16209.patch) > Provide an ExponentialBackOffPolicy sleep between failed region open requests > - > > Key: HBASE-16209 > URL: https://issues.apache.org/jira/browse/HBASE-16209 > Project: HBase > Issue Type: Bug >Reporter: Joseph >Assignee: Joseph > Attachments: HBASE-16209.patch > > > Related to HBASE-16138. As of now we currently have no pause between retrying > failed region open requests. And with a low maximumAttempt default, we can > quickly use up all our regionOpen retries if the server is in a bad state. I > added in a ExponentialBackOffPolicy so that we spread out the timing of our > open region retries in AssignmentManager. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16209) Provide an ExponentialBackOffPolicy sleep between failed region open requests
[ https://issues.apache.org/jira/browse/HBASE-16209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joseph updated HBASE-16209: --- Attachment: HBASE-16209.patch > Provide an ExponentialBackOffPolicy sleep between failed region open requests > - > > Key: HBASE-16209 > URL: https://issues.apache.org/jira/browse/HBASE-16209 > Project: HBase > Issue Type: Bug >Reporter: Joseph >Assignee: Joseph > Attachments: HBASE-16209.patch > > > Related to HBASE-16138. As of now we currently have no pause between retrying > failed region open requests. And with a low maximumAttempt default, we can > quickly use up all our regionOpen retries if the server is in a bad state. I > added in a ExponentialBackOffPolicy so that we spread out the timing of our > open region retries in AssignmentManager. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16209) Provide an ExponentialBackOffPolicy sleep between failed region open requests
[ https://issues.apache.org/jira/browse/HBASE-16209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joseph updated HBASE-16209: --- Attachment: HBASE-16209.patch > Provide an ExponentialBackOffPolicy sleep between failed region open requests > - > > Key: HBASE-16209 > URL: https://issues.apache.org/jira/browse/HBASE-16209 > Project: HBase > Issue Type: Bug >Reporter: Joseph >Assignee: Joseph > Attachments: HBASE-16209.patch > > > Related to HBASE-16138. As of now we currently have no pause between retrying > failed region open requests. And with a low maximumAttempt default, we can > quickly use up all our regionOpen retries if the server is in a bad state. I > added in a ExponentialBackOffPolicy so that we spread out the timing of our > open region retries in AssignmentManager. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16209) Provide an ExponentialBackOffPolicy sleep between failed region open requests
[ https://issues.apache.org/jira/browse/HBASE-16209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joseph updated HBASE-16209: --- Attachment: (was: HBASE-16209.patch) > Provide an ExponentialBackOffPolicy sleep between failed region open requests > - > > Key: HBASE-16209 > URL: https://issues.apache.org/jira/browse/HBASE-16209 > Project: HBase > Issue Type: Bug >Reporter: Joseph >Assignee: Joseph > > Related to HBASE-16138. As of now we currently have no pause between retrying > failed region open requests. And with a low maximumAttempt default, we can > quickly use up all our regionOpen retries if the server is in a bad state. I > added in a ExponentialBackOffPolicy so that we spread out the timing of our > open region retries in AssignmentManager. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16209) Provide an ExponentialBackOffPolicy sleep between failed region open requests
[ https://issues.apache.org/jira/browse/HBASE-16209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joseph updated HBASE-16209: --- Attachment: HBASE-16209.patch > Provide an ExponentialBackOffPolicy sleep between failed region open requests > - > > Key: HBASE-16209 > URL: https://issues.apache.org/jira/browse/HBASE-16209 > Project: HBase > Issue Type: Bug >Reporter: Joseph >Assignee: Joseph > Attachments: HBASE-16209.patch > > > Related to HBASE-16138. As of now we currently have no pause between retrying > failed region open requests. And with a low maximumAttempt default, we can > quickly use up all our regionOpen retries if the server is in a bad state. I > added in a ExponentialBackOffPolicy so that we spread out the timing of our > open region retries in AssignmentManager. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16227) [Shell] Column value formatter not working in scans
[ https://issues.apache.org/jira/browse/HBASE-16227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15375992#comment-15375992 ] Hudson commented on HBASE-16227: SUCCESS: Integrated in HBase-1.3-IT #754 (See [https://builds.apache.org/job/HBase-1.3-IT/754/]) HBASE-16227 [Shell] Column value formatter not working in scans. Tested (appy: rev 3ff9a458d9557dcad8452f1ed10452b5d16df9b3) * hbase-shell/src/main/ruby/hbase/table.rb > [Shell] Column value formatter not working in scans > --- > > Key: HBASE-16227 > URL: https://issues.apache.org/jira/browse/HBASE-16227 > Project: HBase > Issue Type: Bug >Reporter: Appy >Assignee: Appy > Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.3 > > Attachments: HBASE-16227.branch-1.001.patch, > HBASE-16227.master.001.patch > > > {noformat} > hbase(main):003:0> create 't2', 'f' > Created table t2 > Took 1.2750 seconds > hbase(main):004:0> put 't2', 'row', 'f:x', "\x00\x00\x00\x00\x00&\x1B\x85" > Took 0.0680 seconds > hbase(main):005:0> scan 't2', { COLUMNS => 'f:x:toLong' } > ROW COLUMN+CELL > row column=f:x, > timestamp=1468443538145, value=\x00\x00\x00\x00\x00&\x1B\x85 > 1 row(s) > Took 0.0070 seconds > {noformat} > The value should instead be some number. > Caused by HBASE-5980 > With the patch > {noformat} > hbase(main):001:0> scan 't2', { COLUMNS => 'f:x:toLong' } > ROW COLUMN+CELL > row column=f:x, > timestamp=1468443538145, value=2497413 > 1 row(s) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16209) Provide an ExponentialBackOffPolicy sleep between failed region open requests
[ https://issues.apache.org/jira/browse/HBASE-16209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joseph updated HBASE-16209: --- Attachment: (was: HBASE-16209.patch) > Provide an ExponentialBackOffPolicy sleep between failed region open requests > - > > Key: HBASE-16209 > URL: https://issues.apache.org/jira/browse/HBASE-16209 > Project: HBase > Issue Type: Bug >Reporter: Joseph >Assignee: Joseph > > Related to HBASE-16138. As of now we currently have no pause between retrying > failed region open requests. And with a low maximumAttempt default, we can > quickly use up all our regionOpen retries if the server is in a bad state. I > added in a ExponentialBackOffPolicy so that we spread out the timing of our > open region retries in AssignmentManager. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16209) Provide an ExponentialBackOffPolicy sleep between failed region open requests
[ https://issues.apache.org/jira/browse/HBASE-16209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joseph updated HBASE-16209: --- Attachment: HBASE-16209.patch > Provide an ExponentialBackOffPolicy sleep between failed region open requests > - > > Key: HBASE-16209 > URL: https://issues.apache.org/jira/browse/HBASE-16209 > Project: HBase > Issue Type: Bug >Reporter: Joseph >Assignee: Joseph > Attachments: HBASE-16209.patch > > > Related to HBASE-16138. As of now we currently have no pause between retrying > failed region open requests. And with a low maximumAttempt default, we can > quickly use up all our regionOpen retries if the server is in a bad state. I > added in a ExponentialBackOffPolicy so that we spread out the timing of our > open region retries in AssignmentManager. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16209) Provide an ExponentialBackOffPolicy sleep between failed region open requests
[ https://issues.apache.org/jira/browse/HBASE-16209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joseph updated HBASE-16209: --- Description: Related to HBASE-16138. As of now we currently have no pause between retrying failed region open requests. And with a low maximumAttempt default, we can quickly use up all our regionOpen retries if the server is in a bad state. I added in a ExponentialBackOffPolicy so that we spread out the timing of our open region retries in AssignmentManager. (was: Currently inside of AssignmentManager, we only retry opening a failed-open non-meta region maximumAttempt times before just forgetting about it forever. There is no point in limiting the retries to maximumAttempt though, we should just try opening the region as long as AssignmentManager is alive.) > Provide an ExponentialBackOffPolicy sleep between failed region open requests > - > > Key: HBASE-16209 > URL: https://issues.apache.org/jira/browse/HBASE-16209 > Project: HBase > Issue Type: Bug >Reporter: Joseph >Assignee: Joseph > > Related to HBASE-16138. As of now we currently have no pause between retrying > failed region open requests. And with a low maximumAttempt default, we can > quickly use up all our regionOpen retries if the server is in a bad state. I > added in a ExponentialBackOffPolicy so that we spread out the timing of our > open region retries in AssignmentManager. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16227) [Shell] Column value formatter not working in scans
[ https://issues.apache.org/jira/browse/HBASE-16227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15375986#comment-15375986 ] Hudson commented on HBASE-16227: FAILURE: Integrated in HBase-Trunk_matrix #1223 (See [https://builds.apache.org/job/HBase-Trunk_matrix/1223/]) HBASE-16227 [Shell] Column value formatter not working in scans. Tested (appy: rev 28802decc80eabe4711e9ce6595209d07e8514f2) * hbase-shell/src/main/ruby/hbase/table.rb > [Shell] Column value formatter not working in scans > --- > > Key: HBASE-16227 > URL: https://issues.apache.org/jira/browse/HBASE-16227 > Project: HBase > Issue Type: Bug >Reporter: Appy >Assignee: Appy > Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.3 > > Attachments: HBASE-16227.branch-1.001.patch, > HBASE-16227.master.001.patch > > > {noformat} > hbase(main):003:0> create 't2', 'f' > Created table t2 > Took 1.2750 seconds > hbase(main):004:0> put 't2', 'row', 'f:x', "\x00\x00\x00\x00\x00&\x1B\x85" > Took 0.0680 seconds > hbase(main):005:0> scan 't2', { COLUMNS => 'f:x:toLong' } > ROW COLUMN+CELL > row column=f:x, > timestamp=1468443538145, value=\x00\x00\x00\x00\x00&\x1B\x85 > 1 row(s) > Took 0.0070 seconds > {noformat} > The value should instead be some number. > Caused by HBASE-5980 > With the patch > {noformat} > hbase(main):001:0> scan 't2', { COLUMNS => 'f:x:toLong' } > ROW COLUMN+CELL > row column=f:x, > timestamp=1468443538145, value=2497413 > 1 row(s) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16209) Provide an ExponentialBackOffPolicy sleep between failed region open requests
[ https://issues.apache.org/jira/browse/HBASE-16209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joseph updated HBASE-16209: --- Summary: Provide an ExponentialBackOffPolicy sleep between failed region open requests (was: Retry opening a failed-open region indefinitely in AssignmentManager) > Provide an ExponentialBackOffPolicy sleep between failed region open requests > - > > Key: HBASE-16209 > URL: https://issues.apache.org/jira/browse/HBASE-16209 > Project: HBase > Issue Type: Bug >Reporter: Joseph >Assignee: Joseph > > Currently inside of AssignmentManager, we only retry opening a failed-open > non-meta region maximumAttempt times before just forgetting about it forever. > There is no point in limiting the retries to maximumAttempt though, we should > just try opening the region as long as AssignmentManager is alive. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-3727) MultiHFileOutputFormat
[ https://issues.apache.org/jira/browse/HBASE-3727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] yi liang updated HBASE-3727: Attachment: HBASE-3727-V5.patch > MultiHFileOutputFormat > -- > > Key: HBASE-3727 > URL: https://issues.apache.org/jira/browse/HBASE-3727 > Project: HBase > Issue Type: New Feature >Affects Versions: 2.0.0 >Reporter: Andrew Purtell >Assignee: yi liang >Priority: Minor > Attachments: HBASE-3727-V3.patch, HBASE-3727-V4.patch, > HBASE-3727-V5.patch, MH2.patch, MultiHFileOutputFormat.java, > MultiHFileOutputFormat.java, MultiHFileOutputFormat.java, > TestMultiHFileOutputFormat.java > > > Like MultiTableOutputFormat, but outputting HFiles. Key is tablename as an > IBW (ImmutableBytesWritable). Creates sub-writers (code cut and pasted from HFileOutputFormat) on > demand that produce HFiles in per-table subdirectories of the configured > output path. Does not currently support partitioning for existing tables / > incremental update. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
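The core of the idea above is the per-table dispatch: sub-writers are created lazily, keyed by table name, and each one writes under its own subdirectory. The sketch below shows only that dispatch pattern; createWriterFor is a hypothetical hook standing in for the real HFile writer setup in the attached patches.
{code}
// Demand-created "one sub-writer per table" dispatch, as described in the issue.
// Only the routing logic is shown; the HFile wiring lives in the attached patches.
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

public abstract class MultiTableWriterSketch<V> {
  /** One output writer per table; the real class backs each with an HFile writer. */
  interface SubWriter<T> {
    void write(T value) throws IOException;
    void close() throws IOException;
  }

  private final Map<String, SubWriter<V>> writers = new HashMap<>();

  /** Hypothetical hook: open a writer under a per-table subdirectory of the output path. */
  protected abstract SubWriter<V> createWriterFor(String tableName) throws IOException;

  /** Route each value to its table's sub-writer, creating the writer on first use. */
  public void write(String tableName, V value) throws IOException {
    SubWriter<V> writer = writers.get(tableName);
    if (writer == null) {
      writer = createWriterFor(tableName);
      writers.put(tableName, writer);
    }
    writer.write(value);
  }

  /** Close every sub-writer that was created on demand. */
  public void close() throws IOException {
    for (SubWriter<V> writer : writers.values()) {
      writer.close();
    }
  }
}
{code}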
[jira] [Commented] (HBASE-14813) REST documentation under package.html should go to the book
[ https://issues.apache.org/jira/browse/HBASE-14813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15375950#comment-15375950 ] Sean Busbey commented on HBASE-14813: - +1 > REST documentation under package.html should go to the book > --- > > Key: HBASE-14813 > URL: https://issues.apache.org/jira/browse/HBASE-14813 > Project: HBase > Issue Type: Improvement > Components: documentation, REST >Reporter: Enis Soztutar >Assignee: Misty Stanley-Jones > Attachments: HBASE-14813-v1.patch, HBASE-14813.patch > > > It seems that we have more up to date and better documentation under > {{hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/package.html}} than > in the book. We should merge these two. The package.html is only accessible > if you know where to look. > [~misty] FYI. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16226) Thrift server memory leak
[ https://issues.apache.org/jira/browse/HBASE-16226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15375943#comment-15375943 ] Ted Yu commented on HBASE-16226: Have you seen HBASE-3852 ? > Thrift server memory leak > - > > Key: HBASE-16226 > URL: https://issues.apache.org/jira/browse/HBASE-16226 > Project: HBase > Issue Type: Bug > Components: Thrift >Affects Versions: 0.98.0, 1.0.0, 2.0.0, 1.2.0, 1.3.0 >Reporter: Ashu Pachauri > > Thrift servers maintain a scanner map which holds scanner references. We > clean up those references only when a closeScanner call is received from the > client, which means that we never clean up for scanners that failed or if the > client did not close the scanner explicitly. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
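One common way to bound such a map (and roughly the direction the HBASE-3852 pointer above suggests) is to hold scanners in an expiring cache and close whatever gets evicted after an idle timeout. The sketch below uses Guava's cache with an assumed 60-second timeout; it illustrates the approach only and is not the actual Thrift server fix.
{code}
// Expiring scanner map: scanners the client never closes are evicted after an
// idle timeout and closed by the removal listener. The 60s value is assumed.
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.RemovalListener;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
import org.apache.hadoop.hbase.client.ResultScanner;

public class ExpiringScannerMap {
  // Close any scanner that leaves the cache, whether evicted or removed explicitly.
  private final RemovalListener<Integer, ResultScanner> closeOnRemoval = notification -> {
    ResultScanner scanner = notification.getValue();
    if (scanner != null) {
      scanner.close();
    }
  };

  private final Cache<Integer, ResultScanner> scanners = CacheBuilder.newBuilder()
      .expireAfterAccess(60, TimeUnit.SECONDS)
      .removalListener(closeOnRemoval)
      .build();

  private final AtomicInteger nextId = new AtomicInteger();

  public int addScanner(ResultScanner scanner) {
    int id = nextId.incrementAndGet();
    scanners.put(id, scanner);
    return id;
  }

  public ResultScanner getScanner(int id) {
    return scanners.getIfPresent(id);
  }

  public void removeScanner(int id) {
    scanners.invalidate(id); // the explicit closeScanner path also goes through the listener
  }
}
{code}
Note that Guava evicts lazily, so a real server would also need periodic cleanUp() calls (or steady cache traffic) for abandoned scanners to actually be closed.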
[jira] [Updated] (HBASE-15305) Fix a couple of incorrect anchors in HBase ref guide
[ https://issues.apache.org/jira/browse/HBASE-15305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Misty Stanley-Jones updated HBASE-15305: Attachment: (was: HBASE-15305.patch) > Fix a couple of incorrect anchors in HBase ref guide > > > Key: HBASE-15305 > URL: https://issues.apache.org/jira/browse/HBASE-15305 > Project: HBase > Issue Type: Sub-task > Components: documentation >Reporter: Misty Stanley-Jones >Assignee: Misty Stanley-Jones > Fix For: 2.0.0 > > Attachments: HBASE-15305-v2.patch > > > From HBASE-15298: > {quote} > After this patch is applied, there are still two missing asciidoc anchors, > distributed.log.splitting and fail.fast.expired.active.master. These are > related to features removed by HBASE-14053 and HBASE-10569. I think these > anchors(and related texts) should be handled by someone who knows those > issues well, so I retain them. > {quote} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-15305) Fix a couple of incorrect anchors in HBase ref guide
[ https://issues.apache.org/jira/browse/HBASE-15305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Misty Stanley-Jones updated HBASE-15305: Attachment: (was: HBASE-15298-v1.patch) > Fix a couple of incorrect anchors in HBase ref guide > > > Key: HBASE-15305 > URL: https://issues.apache.org/jira/browse/HBASE-15305 > Project: HBase > Issue Type: Sub-task > Components: documentation >Reporter: Misty Stanley-Jones >Assignee: Misty Stanley-Jones > Fix For: 2.0.0 > > Attachments: HBASE-15305-v2.patch > > > From HBASE-15298: > {quote} > After this patch is applied, there are still two missing asciidoc anchors, > distributed.log.splitting and fail.fast.expired.active.master. These are > related to features removed by HBASE-14053 and HBASE-10569. I think these > anchors(and related texts) should be handled by someone who knows those > issues well, so I retain them. > {quote} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-15305) Fix a couple of incorrect anchors in HBase ref guide
[ https://issues.apache.org/jira/browse/HBASE-15305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Misty Stanley-Jones updated HBASE-15305: Attachment: HBASE-15305-v2.patch Please disregard the patches older than today. > Fix a couple of incorrect anchors in HBase ref guide > > > Key: HBASE-15305 > URL: https://issues.apache.org/jira/browse/HBASE-15305 > Project: HBase > Issue Type: Sub-task > Components: documentation >Reporter: Misty Stanley-Jones >Assignee: Misty Stanley-Jones > Fix For: 2.0.0 > > Attachments: HBASE-15305-v2.patch > > > From HBASE-15298: > {quote} > After this patch is applied, there are still two missing asciidoc anchors, > distributed.log.splitting and fail.fast.expired.active.master. These are > related to features removed by HBASE-14053 and HBASE-10569. I think these > anchors(and related texts) should be handled by someone who knows those > issues well, so I retain them. > {quote} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-14813) REST documentation under package.html should go to the book
[ https://issues.apache.org/jira/browse/HBASE-14813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Misty Stanley-Jones updated HBASE-14813: Attachment: HBASE-14813-v1.patch Rebased on something recent, and added the protobuf accept header. > REST documentation under package.html should go to the book > --- > > Key: HBASE-14813 > URL: https://issues.apache.org/jira/browse/HBASE-14813 > Project: HBase > Issue Type: Improvement > Components: documentation, REST >Reporter: Enis Soztutar >Assignee: Misty Stanley-Jones > Attachments: HBASE-14813-v1.patch, HBASE-14813.patch > > > It seems that we have more up to date and better documentation under > {{hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/package.html}} than > in the book. We should merge these two. The package.html is only accessible > if you know where to look. > [~misty] FYI. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-14813) REST documentation under package.html should go to the book
[ https://issues.apache.org/jira/browse/HBASE-14813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15375891#comment-15375891 ] Sean Busbey commented on HBASE-14813: - The package.html version talks about {{x-protobuf}} as a valid {{Accept}} header in several places, and the book talked about it only for "put multiple values". After the change, we don't seem to talk about it at all. Can we add it generally to the top, or, if it doesn't work in all contexts, just on those endpoints where it does work? > REST documentation under package.html should go to the book > --- > > Key: HBASE-14813 > URL: https://issues.apache.org/jira/browse/HBASE-14813 > Project: HBase > Issue Type: Improvement > Components: documentation, REST >Reporter: Enis Soztutar >Assignee: Misty Stanley-Jones > Attachments: HBASE-14813.patch > > > It seems that we have more up to date and better documentation under > {{hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/package.html}} than > in the book. We should merge these two. The package.html is only accessible > if you know where to look. > [~misty] FYI. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
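For context, clients ask the REST gateway for a protobuf-encoded response by sending that media type in the Accept header. A minimal illustration; the host, port, and table/row path below are placeholders, not values from this issue:
{code}
// Requests a protobuf-encoded response from the REST gateway via the Accept header.
// Host, port and the /table/row/column path are placeholders for your own setup.
import java.net.HttpURLConnection;
import java.net.URL;

public class ProtobufAcceptExample {
  public static void main(String[] args) throws Exception {
    URL url = new URL("http://resthost.example.com:8080/mytable/myrow/f:q");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestProperty("Accept", "application/x-protobuf");
    System.out.println("HTTP " + conn.getResponseCode()
        + ", Content-Type: " + conn.getContentType()); // expect application/x-protobuf back
  }
}
{code}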
[jira] [Commented] (HBASE-15305) Fix a couple of incorrect anchors in HBase ref guide
[ https://issues.apache.org/jira/browse/HBASE-15305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15375888#comment-15375888 ] Dima Spivak commented on HBASE-15305: - On line 773, methinks it should be {{hbase_default_configurations}}, no? > Fix a couple of incorrect anchors in HBase ref guide > > > Key: HBASE-15305 > URL: https://issues.apache.org/jira/browse/HBASE-15305 > Project: HBase > Issue Type: Sub-task > Components: documentation >Reporter: Misty Stanley-Jones >Assignee: Misty Stanley-Jones > Fix For: 2.0.0 > > Attachments: HBASE-15298-v1.patch, HBASE-15305.patch > > > From HBASE-15298: > {quote} > After this patch is applied, there are still two missing asciidoc anchors, > distributed.log.splitting and fail.fast.expired.active.master. These are > related to features removed by HBASE-14053 and HBASE-10569. I think these > anchors(and related texts) should be handled by someone who knows those > issues well, so I retain them. > {quote} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16117) Fix Connection leak in mapred.TableOutputFormat
[ https://issues.apache.org/jira/browse/HBASE-16117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Hsieh updated HBASE-16117: --- Attachment: HBASE-16117.branch-1.001.patch > Fix Connection leak in mapred.TableOutputFormat > > > Key: HBASE-16117 > URL: https://issues.apache.org/jira/browse/HBASE-16117 > Project: HBase > Issue Type: Bug > Components: mapreduce >Affects Versions: 2.0.0, 1.3.0, 1.2.2, 1.1.6 >Reporter: Jonathan Hsieh >Assignee: Jonathan Hsieh > Fix For: 2.0.0, 1.1.6, 1.3.1, 1.2.3 > > Attachments: HBASE-16117.branch-1.001.patch, > hbase-16117.branch-1.patch, hbase-16117.patch, hbase-16117.v2.branch-1.patch, > hbase-16117.v2.patch, hbase-16117.v3.branch-1.patch, hbase-16117.v3.patch, > hbase-16117.v4.patch > > > Spark seems to instantiate multiple instances of output formats within a > single process. When mapred.TableOutputFormat (not > mapreduce.TableOutputFormat) is used, this may cause connection leaks that > slowly exhaust the cluster's zk connections. > This patch fixes that. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
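The general shape of a fix for a leak like the one described above is to tie the Connection's lifetime to the writer: open it when the writer is created and close it together with the BufferedMutator in close(). The sketch below shows that pattern in isolation; it is not the attached patch, and the class name is made up.
{code}
// Scope the Connection to the writer so repeated instantiation of the output
// format cannot leak ZooKeeper connections. Illustrative pattern only.
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.BufferedMutator;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;

public class ScopedTableWriter {
  private final Connection connection;
  private final BufferedMutator mutator;

  public ScopedTableWriter(Configuration conf, String tableName) throws IOException {
    this.connection = ConnectionFactory.createConnection(conf);
    this.mutator = connection.getBufferedMutator(TableName.valueOf(tableName));
  }

  public void write(Put put) throws IOException {
    mutator.mutate(put);
  }

  /** Closing the writer releases both the mutator and the underlying Connection. */
  public void close() throws IOException {
    try {
      mutator.close();
    } finally {
      connection.close(); // without this, every writer instance leaks a ZK connection
    }
  }
}
{code}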
[jira] [Commented] (HBASE-14813) REST documentation under package.html should go to the book
[ https://issues.apache.org/jira/browse/HBASE-14813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15375876#comment-15375876 ] Sean Busbey commented on HBASE-14813: - {code} >From 95a5290cec19a511d3f1d44536f107e0d437d760 Mon Sep 17 00:00:00 2001 From: saitejar Date: Wed, 20 Apr 2016 17:41:53 -0700 Subject: [PATCH 1/2] added quotes to prevent errors --- src/main/asciidoc/_chapters/developer.adoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/main/asciidoc/_chapters/developer.adoc b/src/main/asciidoc/_chapters/developer.adoc index 0b284bb..4091833 100644 --- a/src/main/asciidoc/_chapters/developer.adoc +++ b/src/main/asciidoc/_chapters/developer.adoc @@ -105,7 +105,7 @@ We encourage you to have this formatter in place in eclipse when editing HBase c .Procedure: Load the HBase Formatter Into Eclipse . Open the menu item. -. In Preferences, click the menu item. +. In Preferences, Go to `Java->Code Style->Formatter`. . Click btn:[Import] and browse to the location of the _hbase_eclipse_formatter.xml_ file, which is in the _dev-support/_ directory. Click btn:[Apply]. . Still in Preferences, click . -- 2.7.4 (Apple Git-66) {code} Seems unrelated? > REST documentation under package.html should go to the book > --- > > Key: HBASE-14813 > URL: https://issues.apache.org/jira/browse/HBASE-14813 > Project: HBase > Issue Type: Improvement > Components: documentation, REST >Reporter: Enis Soztutar >Assignee: Misty Stanley-Jones > Attachments: HBASE-14813.patch > > > It seems that we have more up to date and better documentation under > {{hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/package.html}} than > in the book. We should merge these two. The package.html is only accessible > if you know where to look. > [~misty] FYI. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-15473) Documentation for the usage of hbase dataframe user api (JSON, Avro, etc)
[ https://issues.apache.org/jira/browse/HBASE-15473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Misty Stanley-Jones updated HBASE-15473: Resolution: Fixed Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) LGTM, committed to master. Thanks for this large revision, [~Weiqing Yang] > Documentation for the usage of hbase dataframe user api (JSON, Avro, etc) > - > > Key: HBASE-15473 > URL: https://issues.apache.org/jira/browse/HBASE-15473 > Project: HBase > Issue Type: Sub-task > Components: documentation, spark >Reporter: Zhan Zhang >Assignee: Weiqing Yang >Priority: Blocker > Fix For: 2.0.0 > > Attachments: HBASE-15473_v1.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16183) Correct errors in example program of coprocessor in Ref Guide
[ https://issues.apache.org/jira/browse/HBASE-16183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Misty Stanley-Jones updated HBASE-16183: Status: Open (was: Patch Available) Please re-create and re-attach the patch. > Correct errors in example program of coprocessor in Ref Guide > - > > Key: HBASE-16183 > URL: https://issues.apache.org/jira/browse/HBASE-16183 > Project: HBase > Issue Type: Bug > Components: documentation >Affects Versions: 1.2.0 >Reporter: Xiang Li >Assignee: Xiang Li >Priority: Minor > Fix For: 2.0.0 > > Attachments: HBASE-16183-master-v1.patch, HBASE-16183.patch > > > There are some errors in the example programs for coprocessor in Ref Guide. > Such as using deprecated APIs, generic... -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16183) Correct errors in example program of coprocessor in Ref Guide
[ https://issues.apache.org/jira/browse/HBASE-16183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15375859#comment-15375859 ] Misty Stanley-Jones commented on HBASE-16183: - This patch does not adhere to the HBase patch guidelines. Please re-create the patch using {{git format-patch}} as outlined in http://hbase.apache.org/book.html#committing.patches. This makes it much easier for a committer to apply the patch and give you credit for your work. > Correct errors in example program of coprocessor in Ref Guide > - > > Key: HBASE-16183 > URL: https://issues.apache.org/jira/browse/HBASE-16183 > Project: HBase > Issue Type: Bug > Components: documentation >Affects Versions: 1.2.0 >Reporter: Xiang Li >Assignee: Xiang Li >Priority: Minor > Fix For: 2.0.0 > > Attachments: HBASE-16183-master-v1.patch, HBASE-16183.patch > > > There are some errors in the example programs for coprocessor in Ref Guide. > Such as using deprecated APIs, generic... -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16219) Move meta bootstrap out of HMaster
[ https://issues.apache.org/jira/browse/HBASE-16219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15375856#comment-15375856 ] Stephen Yuan Jiang commented on HBASE-16219: LGTM > Move meta bootstrap out of HMaster > -- > > Key: HBASE-16219 > URL: https://issues.apache.org/jira/browse/HBASE-16219 > Project: HBase > Issue Type: Sub-task > Components: master, Region Assignment >Affects Versions: 2.0.0 >Reporter: Matteo Bertozzi >Assignee: Matteo Bertozzi >Priority: Trivial > Fix For: 2.0.0 > > Attachments: HBASE-16219-v0.patch > > > another cleanup to have a smaller integration patch for the new AM. > Trying to isolate the Assignment code from the HMaster. > Move all the bootstrap code to split meta logs and assign meta regions from > HMaster to a MasterMetaBootstrap class to also reduce the long > finishActiveMasterInitialization() method -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16227) [Shell] Column value formatter not working in scans
[ https://issues.apache.org/jira/browse/HBASE-16227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15375803#comment-15375803 ] Hadoop QA commented on HBASE-16227: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} rubocop {color} | {color:blue} 0m 0s {color} | {color:blue} rubocop was not available. {color} | | {color:blue}0{color} | {color:blue} ruby-lint {color} | {color:blue} 0m 0s {color} | {color:blue} Ruby-lint was not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 27s {color} | {color:green} branch-1 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s {color} | {color:green} branch-1 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s {color} | {color:green} branch-1 passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 8s {color} | {color:green} branch-1 passed with JDK v1.7.0_80 {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 12s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 15m 38s {color} | {color:green} Patch does not cause any errors with Hadoop 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s {color} | {color:green} the patch passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 8s {color} | {color:green} the patch passed with JDK v1.7.0_80 {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 6m 31s {color} | {color:green} hbase-shell in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 7s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 26m 21s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12817801/HBASE-16227.branch-1.001.patch | | JIRA Issue | HBASE-16227 | | Optional Tests | asflicense javac javadoc unit rubocop ruby_lint | | uname | Linux asf901.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | branch-1 / 95d141f | | Default Java | 1.7.0_80 | | Multi-JDK versions | /home/jenkins/tools/java/jdk1.8.0:1.8.0 /home/jenkins/jenkins-slave/tools/hudson.model.JDK/JDK_1.7_latest_:1.7.0_80 | | Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/2625/testReport/ | | modules | C: hbase-shell U: hbase-shell | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/2625/console | | Powered by | Apache Yetus 0.2.1 http://yetus.apache.org | This message was automatically generated. > [Shell] Column value formatter not working in scans > --- > > Key: HBASE-16227 > URL: https://issues.apache.org/jira/browse/HBASE-16227 > Project: HBase > Issue Type: Bug >Reporter: Appy >Assignee: Appy > Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.3 > > Attachments: HBASE-16227.branch-1.001.patch, > HBASE-16227.master.001.patch > > > {noformat} > hbase(main):003:0> create 't2', 'f' > Created table t2 > Took 1.2750 seconds > hbase(main):004:0> put 't2', 'row', 'f:x', "\x00\x00\x00\x00\x00&\x1B\x85" > Took 0.0680 seconds > hbase(main):005:0> scan 't2', { COLUMNS => 'f:x:toLong' } > ROW COLUMN+CELL > row column=f:x, > timestamp=1468443538145, value=\x00\x00\x00\x00\x00&\x1B\x85 > 1 row(
[jira] [Commented] (HBASE-16117) Fix Connection leak in mapred.TableOutputFormat
[ https://issues.apache.org/jira/browse/HBASE-16117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15375796#comment-15375796 ] Jonathan Hsieh commented on HBASE-16117: Not a blocker (if you don't mind a minor api semantics change in a maintenance release .. though I'm looking into that based on your review now.) > Fix Connection leak in mapred.TableOutputFormat > > > Key: HBASE-16117 > URL: https://issues.apache.org/jira/browse/HBASE-16117 > Project: HBase > Issue Type: Bug > Components: mapreduce >Affects Versions: 2.0.0, 1.3.0, 1.2.2, 1.1.6 >Reporter: Jonathan Hsieh >Assignee: Jonathan Hsieh > Fix For: 2.0.0, 1.1.6, 1.3.1, 1.2.3 > > Attachments: hbase-16117.branch-1.patch, hbase-16117.patch, > hbase-16117.v2.branch-1.patch, hbase-16117.v2.patch, > hbase-16117.v3.branch-1.patch, hbase-16117.v3.patch, hbase-16117.v4.patch > > > Spark seems to instantiate multiple instances of output formats within a > single process. When mapred.TableOutputFormat (not > mapreduce.TableOutputFormat) is used, this may cause connection leaks that > slowly exhaust the cluster's zk connections. > This patch fixes that. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16227) [Shell] Column value formatter not working in scans
[ https://issues.apache.org/jira/browse/HBASE-16227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Appy updated HBASE-16227: - Fix Version/s: 1.2.3 1.3.0 > [Shell] Column value formatter not working in scans > --- > > Key: HBASE-16227 > URL: https://issues.apache.org/jira/browse/HBASE-16227 > Project: HBase > Issue Type: Bug >Reporter: Appy >Assignee: Appy > Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.3 > > Attachments: HBASE-16227.branch-1.001.patch, > HBASE-16227.master.001.patch > > > {noformat} > hbase(main):003:0> create 't2', 'f' > Created table t2 > Took 1.2750 seconds > hbase(main):004:0> put 't2', 'row', 'f:x', "\x00\x00\x00\x00\x00&\x1B\x85" > Took 0.0680 seconds > hbase(main):005:0> scan 't2', { COLUMNS => 'f:x:toLong' } > ROW COLUMN+CELL > row column=f:x, > timestamp=1468443538145, value=\x00\x00\x00\x00\x00&\x1B\x85 > 1 row(s) > Took 0.0070 seconds > {noformat} > The value should instead be some number. > Caused by HBASE-5980 > With the patch > {noformat} > hbase(main):001:0> scan 't2', { COLUMNS => 'f:x:toLong' } > ROW COLUMN+CELL > row column=f:x, > timestamp=1468443538145, value=2497413 > 1 row(s) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16227) [Shell] Column value formatter not working in scans
[ https://issues.apache.org/jira/browse/HBASE-16227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Appy updated HBASE-16227: - Resolution: Fixed Status: Resolved (was: Patch Available) > [Shell] Column value formatter not working in scans > --- > > Key: HBASE-16227 > URL: https://issues.apache.org/jira/browse/HBASE-16227 > Project: HBase > Issue Type: Bug >Reporter: Appy >Assignee: Appy > Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.3 > > Attachments: HBASE-16227.branch-1.001.patch, > HBASE-16227.master.001.patch > > > {noformat} > hbase(main):003:0> create 't2', 'f' > Created table t2 > Took 1.2750 seconds > hbase(main):004:0> put 't2', 'row', 'f:x', "\x00\x00\x00\x00\x00&\x1B\x85" > Took 0.0680 seconds > hbase(main):005:0> scan 't2', { COLUMNS => 'f:x:toLong' } > ROW COLUMN+CELL > row column=f:x, > timestamp=1468443538145, value=\x00\x00\x00\x00\x00&\x1B\x85 > 1 row(s) > Took 0.0070 seconds > {noformat} > The value should instead be some number. > Caused by HBASE-5980 > With the patch > {noformat} > hbase(main):001:0> scan 't2', { COLUMNS => 'f:x:toLong' } > ROW COLUMN+CELL > row column=f:x, > timestamp=1468443538145, value=2497413 > 1 row(s) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16227) [Shell] Column value formatter not working in scans
[ https://issues.apache.org/jira/browse/HBASE-16227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Appy updated HBASE-16227: - Fix Version/s: 1.4.0 > [Shell] Column value formatter not working in scans > --- > > Key: HBASE-16227 > URL: https://issues.apache.org/jira/browse/HBASE-16227 > Project: HBase > Issue Type: Bug >Reporter: Appy >Assignee: Appy > Fix For: 2.0.0, 1.4.0 > > Attachments: HBASE-16227.branch-1.001.patch, > HBASE-16227.master.001.patch > > > {noformat} > hbase(main):003:0> create 't2', 'f' > Created table t2 > Took 1.2750 seconds > hbase(main):004:0> put 't2', 'row', 'f:x', "\x00\x00\x00\x00\x00&\x1B\x85" > Took 0.0680 seconds > hbase(main):005:0> scan 't2', { COLUMNS => 'f:x:toLong' } > ROW COLUMN+CELL > row column=f:x, > timestamp=1468443538145, value=\x00\x00\x00\x00\x00&\x1B\x85 > 1 row(s) > Took 0.0070 seconds > {noformat} > The value should instead be some number. > Caused by HBASE-5980 > With the patch > {noformat} > hbase(main):001:0> scan 't2', { COLUMNS => 'f:x:toLong' } > ROW COLUMN+CELL > row column=f:x, > timestamp=1468443538145, value=2497413 > 1 row(s) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16227) [Shell] Column value formatter not working in scans
[ https://issues.apache.org/jira/browse/HBASE-16227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Appy updated HBASE-16227: - Fix Version/s: 2.0.0 > [Shell] Column value formatter not working in scans > --- > > Key: HBASE-16227 > URL: https://issues.apache.org/jira/browse/HBASE-16227 > Project: HBase > Issue Type: Bug >Reporter: Appy >Assignee: Appy > Fix For: 2.0.0 > > Attachments: HBASE-16227.branch-1.001.patch, > HBASE-16227.master.001.patch > > > {noformat} > hbase(main):003:0> create 't2', 'f' > Created table t2 > Took 1.2750 seconds > hbase(main):004:0> put 't2', 'row', 'f:x', "\x00\x00\x00\x00\x00&\x1B\x85" > Took 0.0680 seconds > hbase(main):005:0> scan 't2', { COLUMNS => 'f:x:toLong' } > ROW COLUMN+CELL > row column=f:x, > timestamp=1468443538145, value=\x00\x00\x00\x00\x00&\x1B\x85 > 1 row(s) > Took 0.0070 seconds > {noformat} > The value should instead be some number. > Caused by HBASE-5980 > With the patch > {noformat} > hbase(main):001:0> scan 't2', { COLUMNS => 'f:x:toLong' } > ROW COLUMN+CELL > row column=f:x, > timestamp=1468443538145, value=2497413 > 1 row(s) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16227) [Shell] Column value formatter not working in scans
[ https://issues.apache.org/jira/browse/HBASE-16227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15375768#comment-15375768 ] Matteo Bertozzi commented on HBASE-16227: - +1 > [Shell] Column value formatter not working in scans > --- > > Key: HBASE-16227 > URL: https://issues.apache.org/jira/browse/HBASE-16227 > Project: HBase > Issue Type: Bug >Reporter: Appy >Assignee: Appy > Attachments: HBASE-16227.branch-1.001.patch, > HBASE-16227.master.001.patch > > > {noformat} > hbase(main):003:0> create 't2', 'f' > Created table t2 > Took 1.2750 seconds > hbase(main):004:0> put 't2', 'row', 'f:x', "\x00\x00\x00\x00\x00&\x1B\x85" > Took 0.0680 seconds > hbase(main):005:0> scan 't2', { COLUMNS => 'f:x:toLong' } > ROW COLUMN+CELL > row column=f:x, > timestamp=1468443538145, value=\x00\x00\x00\x00\x00&\x1B\x85 > 1 row(s) > Took 0.0070 seconds > {noformat} > The value should instead be some number. > Caused by HBASE-5980 > With the patch > {noformat} > hbase(main):001:0> scan 't2', { COLUMNS => 'f:x:toLong' } > ROW COLUMN+CELL > row column=f:x, > timestamp=1468443538145, value=2497413 > 1 row(s) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16227) [Shell] Column value formatter not working in scans
[ https://issues.apache.org/jira/browse/HBASE-16227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Appy updated HBASE-16227: - Attachment: HBASE-16227.branch-1.001.patch > [Shell] Column value formatter not working in scans > --- > > Key: HBASE-16227 > URL: https://issues.apache.org/jira/browse/HBASE-16227 > Project: HBase > Issue Type: Bug >Reporter: Appy >Assignee: Appy > Attachments: HBASE-16227.branch-1.001.patch, > HBASE-16227.master.001.patch > > > {noformat} > hbase(main):003:0> create 't2', 'f' > Created table t2 > Took 1.2750 seconds > hbase(main):004:0> put 't2', 'row', 'f:x', "\x00\x00\x00\x00\x00&\x1B\x85" > Took 0.0680 seconds > hbase(main):005:0> scan 't2', { COLUMNS => 'f:x:toLong' } > ROW COLUMN+CELL > row column=f:x, > timestamp=1468443538145, value=\x00\x00\x00\x00\x00&\x1B\x85 > 1 row(s) > Took 0.0070 seconds > {noformat} > The value should instead be some number. > Caused by HBASE-5980 > With the patch > {noformat} > hbase(main):001:0> scan 't2', { COLUMNS => 'f:x:toLong' } > ROW COLUMN+CELL > row column=f:x, > timestamp=1468443538145, value=2497413 > 1 row(s) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16227) [Shell] Column value formatter not working in scans
[ https://issues.apache.org/jira/browse/HBASE-16227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Appy updated HBASE-16227: - Status: Patch Available (was: Open) > [Shell] Column value formatter not working in scans > --- > > Key: HBASE-16227 > URL: https://issues.apache.org/jira/browse/HBASE-16227 > Project: HBase > Issue Type: Bug >Reporter: Appy >Assignee: Appy > Attachments: HBASE-16227.master.001.patch > > > {noformat} > hbase(main):003:0> create 't2', 'f' > Created table t2 > Took 1.2750 seconds > hbase(main):004:0> put 't2', 'row', 'f:x', "\x00\x00\x00\x00\x00&\x1B\x85" > Took 0.0680 seconds > hbase(main):005:0> scan 't2', { COLUMNS => 'f:x:toLong' } > ROW COLUMN+CELL > row column=f:x, > timestamp=1468443538145, value=\x00\x00\x00\x00\x00&\x1B\x85 > 1 row(s) > Took 0.0070 seconds > {noformat} > The value should instead be some number. > Caused by HBASE-5980 > With the patch > {noformat} > hbase(main):001:0> scan 't2', { COLUMNS => 'f:x:toLong' } > ROW COLUMN+CELL > row column=f:x, > timestamp=1468443538145, value=2497413 > 1 row(s) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16227) [Shell] Column value formatter not working in scans
[ https://issues.apache.org/jira/browse/HBASE-16227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Appy updated HBASE-16227: - Attachment: HBASE-16227.master.001.patch > [Shell] Column value formatter not working in scans > --- > > Key: HBASE-16227 > URL: https://issues.apache.org/jira/browse/HBASE-16227 > Project: HBase > Issue Type: Bug >Reporter: Appy >Assignee: Appy > Attachments: HBASE-16227.master.001.patch > > > {noformat} > hbase(main):003:0> create 't2', 'f' > Created table t2 > Took 1.2750 seconds > hbase(main):004:0> put 't2', 'row', 'f:x', "\x00\x00\x00\x00\x00&\x1B\x85" > Took 0.0680 seconds > hbase(main):005:0> scan 't2', { COLUMNS => 'f:x:toLong' } > ROW COLUMN+CELL > row column=f:x, > timestamp=1468443538145, value=\x00\x00\x00\x00\x00&\x1B\x85 > 1 row(s) > Took 0.0070 seconds > {noformat} > The value should instead be some number. > Caused by HBASE-5980 > With the patch > {noformat} > hbase(main):001:0> scan 't2', { COLUMNS => 'f:x:toLong' } > ROW COLUMN+CELL > row column=f:x, > timestamp=1468443538145, value=2497413 > 1 row(s) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16227) [Shell] Column value formatter not working in scans
[ https://issues.apache.org/jira/browse/HBASE-16227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Appy updated HBASE-16227: - Description: {noformat} hbase(main):003:0> create 't2', 'f' Created table t2 Took 1.2750 seconds hbase(main):004:0> put 't2', 'row', 'f:x', "\x00\x00\x00\x00\x00&\x1B\x85" Took 0.0680 seconds hbase(main):005:0> scan 't2', { COLUMNS => 'f:x:toLong' } ROW COLUMN+CELL row column=f:x, timestamp=1468443538145, value=\x00\x00\x00\x00\x00&\x1B\x85 1 row(s) Took 0.0070 seconds {noformat} The value should instead be some number. Caused by HBASE-5980 With the patch {noformat} hbase(main):001:0> scan 't2', { COLUMNS => 'f:x:toLong' } ROW COLUMN+CELL row column=f:x, timestamp=1468443538145, value=2497413 1 row(s) {noformat} was: {noformat} hbase(main):003:0> create 't2', 'f' Created table t2 Took 1.2750 seconds hbase(main):004:0> put 't2', 'row', 'f:x', "\x00\x00\x00\x00\x00&\x1B\x85" Took 0.0680 seconds hbase(main):005:0> scan 't2', { COLUMNS => 'f:x:toLong' } ROW COLUMN+CELL row column=f:x, timestamp=1468443538145, value=\x00\x00\x00\x00\x00&\x1B\x85 1 row(s) Took 0.0070 seconds {noformat} The value should instead be some number. Caused by HBASE-5980 > [Shell] Column value formatter not working in scans > --- > > Key: HBASE-16227 > URL: https://issues.apache.org/jira/browse/HBASE-16227 > Project: HBase > Issue Type: Bug >Reporter: Appy >Assignee: Appy > > {noformat} > hbase(main):003:0> create 't2', 'f' > Created table t2 > Took 1.2750 seconds > hbase(main):004:0> put 't2', 'row', 'f:x', "\x00\x00\x00\x00\x00&\x1B\x85" > Took 0.0680 seconds > hbase(main):005:0> scan 't2', { COLUMNS => 'f:x:toLong' } > ROW COLUMN+CELL > row column=f:x, > timestamp=1468443538145, value=\x00\x00\x00\x00\x00&\x1B\x85 > 1 row(s) > Took 0.0070 seconds > {noformat} > The value should instead be some number. > Caused by HBASE-5980 > With the patch > {noformat} > hbase(main):001:0> scan 't2', { COLUMNS => 'f:x:toLong' } > ROW COLUMN+CELL > row column=f:x, > timestamp=1468443538145, value=2497413 > 1 row(s) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
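One more illustrative round trip (again a sketch, not taken from any patch on this issue): the escaped string passed to put in the repro is exactly the big-endian encoding of 2497413, which is why the formatter should print that number once it is applied.
{noformat}
import org.apache.hadoop.hbase.util.Bytes;

// Round-trip check for the value used in the repro: encoding 2497413L gives
// the same 8 bytes that the shell 'put' wrote as an escaped string literal.
public class RoundTripCheck {
  public static void main(String[] args) {
    byte[] encoded = Bytes.toBytes(2497413L);

    // Prints \x00\x00\x00\x00\x00&\x1B\x85 -- 0x26 is the printable '&',
    // so toStringBinary leaves it unescaped, as in the repro output.
    System.out.println(Bytes.toStringBinary(encoded));

    // Decoding it again yields the number the ':toLong' formatter should show.
    System.out.println(Bytes.toLong(encoded));   // 2497413
  }
}
{noformat}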