[jira] [Created] (HBASE-24345) [ACL] renameRSGroup should require Admin level permission
Reid Chan created HBASE-24345: - Summary: [ACL] renameRSGroup should require Admin level permission Key: HBASE-24345 URL: https://issues.apache.org/jira/browse/HBASE-24345 Project: HBase Issue Type: Improvement Components: acl, rsgroup Reporter: Reid Chan Assignee: Reid Chan -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-24345) [ACL] renameRSGroup should require Admin level permission
[ https://issues.apache.org/jira/browse/HBASE-24345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Reid Chan updated HBASE-24345: -- Priority: Major (was: Blocker) > [ACL] renameRSGroup should require Admin level permission > - > > Key: HBASE-24345 > URL: https://issues.apache.org/jira/browse/HBASE-24345 > Project: HBase > Issue Type: Improvement > Components: acl, rsgroup >Reporter: Reid Chan >Assignee: Reid Chan >Priority: Major > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-23887) BlockCache performance improve by reduce eviction rate
[ https://issues.apache.org/jira/browse/HBASE-23887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Danil Lipovoy updated HBASE-23887: -- Description: Hi! This is my first time here, so please correct me if something is wrong. I want to propose a way to improve performance when the data in HFiles is much larger than the BlockCache (a common story in big data). The idea is to cache only part of the DATA blocks. This is good because LruBlockCache keeps working well and we save a huge amount of GC. Sometimes we have more data than fits into the BlockCache, which causes a high eviction rate. In that case we can skip caching block N and instead cache block N+1; we would evict block N quite soon anyway, which is why skipping it is good for performance. Example: imagine a tiny cache that can fit only 1 block, and we read 3 blocks with offsets: 124, 198, 223. The current way: we put block 124, then put 198 (evicting 124), then put 223 (evicting 198). A lot of work (5 actions). With the feature, the last two digits of the offsets are evenly distributed from 0 to 99. Taking each offset modulo 100 gives: 124 -> 24, 198 -> 98, 223 -> 23. This lets us partition the blocks: the part below the threshold, for example below 50 (if we set *hbase.lru.cache.data.block.percent* = 50), goes into the cache, and the rest is skipped. That means we never try to handle block 198 and save the CPU for other work. As a result we put block 124, then put 223 (evicting 124): 3 actions. See the picture in the attachment with the test below: requests per second are higher and GC is lower. The key point of the code: a new parameter, *hbase.lru.cache.data.block.percent*, which defaults to 100. If we set it to 1-99, the following logic kicks in: {code:java} public void cacheBlock(BlockCacheKey cacheKey, Cacheable buf, boolean inMemory) { if (cacheDataBlockPercent != 100 && buf.getBlockType().isData()) if (cacheKey.getOffset() % 100 >= cacheDataBlockPercent) return; ... // the same code as usual } {code} Other parameters control when this logic is enabled.
It means the logic only runs while heavy reading is going on. hbase.lru.cache.heavy.eviction.count.limit - how many consecutive runs of the eviction process must occur before we start skipping data blocks. hbase.lru.cache.heavy.eviction.bytes.size.limit - how many bytes must be evicted on each run before we start skipping data blocks. By default: if the eviction process runs 10 times (100 seconds) and evicts more than 10 MB each time, we start to skip 50% of the data blocks. When the heavy eviction period ends, the new logic switches off and all blocks go into the BlockCache again. Description of the test: 4 nodes E5-2698 v4 @ 2.20GHz, 700 GB RAM. 4 RegionServers. 4 tables x 64 regions x 1.88 GB of data each = 600 GB total (FAST_DIFF only). Total BlockCache size = 48 GB (8% of the data in HFiles). Random reads in 20 threads. I am going to make a pull request; I hope this is the right way to make a contribution to this cool product. was: Hi! This is my first time here, so please correct me if something is wrong. I want to propose a way to improve performance when the data in HFiles is much larger than the BlockCache (a common story in big data). The idea is to cache only part of the DATA blocks. This is good because LruBlockCache keeps working well and we save a huge amount of GC. Sometimes we have more data than fits into the BlockCache, which causes a high eviction rate. In that case we can skip caching block N and instead cache block N+1; we would evict block N quite soon anyway, which is why skipping it is good for performance. See the picture in the attachment with the test below: requests per second are higher and GC is lower. The key point of the code: a new parameter, *hbase.lru.cache.data.block.percent*, which defaults to 100. If we set it to 1-99, the following logic kicks in: {code:java} public void cacheBlock(BlockCacheKey cacheKey, Cacheable buf, boolean inMemory) { if (cacheDataBlockPercent != 100 && buf.getBlockType().isData()) if (cacheKey.getOffset() % 100 >= cacheDataBlockPercent) return; ...
// the same code as usual } {code} Other parameters control when this logic is enabled. hbase.lru.cache.heavy.eviction.count.limit - how many consecutive runs of the eviction process must occur before we start skipping data blocks. hbase.lru.cache.heavy.eviction.bytes.size.limit - how many bytes must be evicted on each run before we start skipping data blocks. By default: if the eviction process runs 10 times (100 seconds) and evicts more than 10 MB each time, we start to skip 50% of the data blocks. When the heavy eviction period ends, all blocks go into the BlockCache again. Description of the test: 4 nodes E5-2698 v4 @ 2.20GHz, 700 GB RAM. 4 RegionServers. 4 tables x 64 regions x 1.88 GB of data each = 600 GB total (FAST_DIFF only). Total BlockCache size = 48 GB (8% of the data in HFiles). Random reads in 20 threads. I am going to make a pull request; I hope this is the right way to make a contribution to this cool product.
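The modulo-based admission rule described above can be sketched as a self-contained snippet (a minimal sketch; the class and method names are illustrative, not the actual patch):

```java
// Sketch of the proposed cache-admission filter from HBASE-23887.
// Assumption: names here are illustrative, not the real HBase code.
public class CacheAdmissionSketch {
    // Percent of data blocks admitted (hbase.lru.cache.data.block.percent).
    static int cacheDataBlockPercent = 50;

    // Returns true when a data block at this HFile offset should be cached.
    // Offsets are roughly uniform mod 100, so this admits about
    // cacheDataBlockPercent percent of the data blocks.
    static boolean shouldCacheData(long offset) {
        if (cacheDataBlockPercent == 100) {
            return true; // feature disabled: cache everything
        }
        return offset % 100 < cacheDataBlockPercent;
    }

    public static void main(String[] args) {
        // The ticket's example: offsets 124, 198, 223 with percent = 50.
        System.out.println(shouldCacheData(124)); // 24 < 50 -> cached
        System.out.println(shouldCacheData(198)); // 98 >= 50 -> skipped
        System.out.println(shouldCacheData(223)); // 23 < 50 -> cached
    }
}
```

With the filter, block 198 is never handled at all, which is exactly why the example above drops from 5 cache actions to 3.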
[jira] [Updated] (HBASE-24345) [ACL] renameRSGroup should require Admin level permission
[ https://issues.apache.org/jira/browse/HBASE-24345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Reid Chan updated HBASE-24345: -- Description: Currently renameRSGroup can be called by anyone, without any permission check. > [ACL] renameRSGroup should require Admin level permission > - > > Key: HBASE-24345 > URL: https://issues.apache.org/jira/browse/HBASE-24345 > Project: HBase > Issue Type: Improvement > Components: acl, rsgroup >Reporter: Reid Chan >Assignee: Reid Chan >Priority: Major > > Currently renameRSGroup can be called by anyone, without any permission check. -- This message was sent by Atlassian Jira (v8.3.4#803005)
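A minimal sketch of the requested guard (all names here are illustrative stand-ins, not the actual HBASE-24345 patch; the real change would go through HBase's ACL/AccessChecker machinery in the rsgroup endpoint):

```java
import java.util.EnumSet;
import java.util.Set;

// Hypothetical sketch: reject renameRSGroup unless the caller holds ADMIN,
// the way other rsgroup admin operations are gated.
public class RenameRSGroupAclSketch {
    enum Permission { READ, WRITE, CREATE, ADMIN }

    // Throws when the caller lacks ADMIN.
    static void checkAdmin(String user, Set<Permission> granted) {
        if (!granted.contains(Permission.ADMIN)) {
            throw new SecurityException("Insufficient permissions for user '"
                + user + "' (renameRSGroup requires ADMIN)");
        }
    }

    public static void main(String[] args) {
        checkAdmin("admin", EnumSet.of(Permission.ADMIN)); // passes
        try {
            checkAdmin("bob", EnumSet.of(Permission.READ));
        } catch (SecurityException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```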
[jira] [Updated] (HBASE-23887) BlockCache performance improve by reduce eviction rate
[ https://issues.apache.org/jira/browse/HBASE-23887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Danil Lipovoy updated HBASE-23887: -- Description: Hi! This is my first time here, so please correct me if something is wrong. I want to propose a way to improve performance when the data in HFiles is much larger than the BlockCache (a common story in big data). The idea is to cache only part of the DATA blocks. This is good because LruBlockCache keeps working well and we save a huge amount of GC. Sometimes we have more data than fits into the BlockCache, which causes a high eviction rate. In that case we can skip caching block N and instead cache block N+1; we would evict block N quite soon anyway, which is why skipping it is good for performance. See the picture in the attachment with the test below: requests per second are higher and GC is lower. The key point of the code: a new parameter, *hbase.lru.cache.data.block.percent*, which defaults to 100. If we set it to 1-99, the following logic kicks in: {code:java} public void cacheBlock(BlockCacheKey cacheKey, Cacheable buf, boolean inMemory) { if (cacheDataBlockPercent != 100 && buf.getBlockType().isData()) if (cacheKey.getOffset() % 100 >= cacheDataBlockPercent) return; ... // the same code as usual } {code} Other parameters control when this logic is enabled. hbase.lru.cache.heavy.eviction.count.limit - how many consecutive runs of the eviction process must occur before we start skipping data blocks. hbase.lru.cache.heavy.eviction.bytes.size.limit - how many bytes must be evicted on each run before we start skipping data blocks. By default: if the eviction process runs 10 times (100 seconds) and evicts more than 10 MB each time, we start to skip 50% of the data blocks. When the heavy eviction period ends, all blocks go into the BlockCache again. Description of the test: 4 nodes E5-2698 v4 @ 2.20GHz, 700 GB RAM.
4 RegionServers. 4 tables x 64 regions x 1.88 GB of data each = 600 GB total (FAST_DIFF only). Total BlockCache size = 48 GB (8% of the data in HFiles). Random reads in 20 threads. I am going to make a pull request; I hope this is the right way to make a contribution to this cool product. was: Hi! This is my first time here, so please correct me if something is wrong. I want to propose a way to improve performance when the data in HFiles is much larger than the BlockCache (a common story in big data). The idea is to cache only part of the DATA blocks. This is good because LruBlockCache keeps working well and we save a huge amount of GC. Sometimes we have more data than fits into the BlockCache, which causes a high eviction rate. In that case we can skip caching block N and instead cache block N+1; we would evict block N quite soon anyway, which is why skipping it is good for performance. See the picture in the attachment with the test below: requests per second are higher and GC is lower. The key point of the code: a new parameter, *hbase.lru.cache.data.block.percent*, which defaults to 100. If we set it to 1-99, the following logic kicks in: {code:java} public void cacheBlock(BlockCacheKey cacheKey, Cacheable buf, boolean inMemory) { if (cacheDataBlockPercent != 100 && buf.getBlockType().isData()) if (cacheKey.getOffset() % 100 >= cacheDataBlockPercent) return; ... // the same code as usual } {code} Description of the test: 4 nodes E5-2698 v4 @ 2.20GHz, 700 GB RAM. 4 RegionServers. 4 tables x 64 regions x 1.88 GB of data each = 600 GB total (FAST_DIFF only). Total BlockCache size = 48 GB (8% of the data in HFiles). Random reads in 20 threads. I am going to make a pull request; I hope this is the right way to make a contribution to this cool product.
> BlockCache performance improve by reduce eviction rate > -- > > Key: HBASE-23887 > URL: https://issues.apache.org/jira/browse/HBASE-23887 > Project: HBase > Issue Type: Improvement > Components: BlockCache, Performance >Reporter: Danil Lipovoy >Priority: Minor > Attachments: 1582787018434_rs_metrics.jpg, > 1582801838065_rs_metrics_new.png, BC_LongRun.png, cmp.png, > evict_BC100_vs_BC23.png, read_requests_100pBC_vs_23pBC.png > > > Hi! > This is my first time here, so please correct me if something is wrong. > I want to propose a way to improve performance when the data in HFiles is much larger > than the BlockCache (a common story in big data). The idea is to cache only part of the DATA > blocks. This is good because LruBlockCache keeps working well and we save a huge amount > of GC. > Sometimes we have more data than fits into the BlockCache, which causes a > high eviction rate. In that case we can skip caching block N and instead > cache block N+1; we would evict block N quite soon anyway, which is why > skipping it is good for performance. > See the picture in the attachment with the test below: requests per second are higher and GC is lower.
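The on/off heuristic the two heavy-eviction parameters describe can be sketched like this (a minimal sketch; field and method names are illustrative, not the actual patch):

```java
// Sketch of the heavy-eviction toggle from HBASE-23887 (names illustrative).
// Skipping kicks in only after the eviction thread has run "heavily" several
// times in a row, and switches off again once evictions calm down.
public class HeavyEvictionToggleSketch {
    // hbase.lru.cache.heavy.eviction.count.limit (default 10 runs ~ 100 s)
    static final int HEAVY_EVICTION_COUNT_LIMIT = 10;
    // hbase.lru.cache.heavy.eviction.bytes.size.limit (default 10 MB)
    static final long HEAVY_EVICTION_BYTES_LIMIT = 10L * 1024 * 1024;

    static int heavyEvictionCount = 0;
    static boolean skipDataBlocks = false;

    // Called once per eviction run with the number of bytes just evicted.
    static void onEviction(long bytesEvicted) {
        if (bytesEvicted > HEAVY_EVICTION_BYTES_LIMIT) {
            heavyEvictionCount++;
            if (heavyEvictionCount >= HEAVY_EVICTION_COUNT_LIMIT) {
                skipDataBlocks = true; // heavy reading: start skipping blocks
            }
        } else {
            heavyEvictionCount = 0;
            skipDataBlocks = false;    // calm again: cache everything
        }
    }

    public static void main(String[] args) {
        for (int i = 0; i < 10; i++) {
            onEviction(20L * 1024 * 1024); // ten heavy runs in a row
        }
        System.out.println(skipDataBlocks); // true: skipping enabled
        onEviction(1024);                   // one light run
        System.out.println(skipDataBlocks); // false: back to normal
    }
}
```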
[jira] [Updated] (HBASE-23887) BlockCache performance improve by reduce eviction rate
[ https://issues.apache.org/jira/browse/HBASE-23887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Danil Lipovoy updated HBASE-23887: -- Description: Hi! This is my first time here, so please correct me if something is wrong. I want to propose a way to improve performance when the data in HFiles is much larger than the BlockCache (a common story in big data). The idea is to cache only part of the DATA blocks. This is good because LruBlockCache keeps working well and we save a huge amount of GC. Sometimes we have more data than fits into the BlockCache, which causes a high eviction rate. In that case we can skip caching block N and instead cache block N+1; we would evict block N quite soon anyway, which is why skipping it is good for performance. See the picture in the attachment with the test below: requests per second are higher and GC is lower. The key point of the code: a new parameter, *hbase.lru.cache.data.block.percent*, which defaults to 100. If we set it to 1-99, the following logic kicks in: {code:java} public void cacheBlock(BlockCacheKey cacheKey, Cacheable buf, boolean inMemory) { if (cacheDataBlockPercent != 100 && buf.getBlockType().isData()) if (cacheKey.getOffset() % 100 >= cacheDataBlockPercent) return; ... // the same code as usual } {code} Description of the test: 4 nodes E5-2698 v4 @ 2.20GHz, 700 GB RAM. 4 RegionServers. 4 tables x 64 regions x 1.88 GB of data each = 600 GB total (FAST_DIFF only). Total BlockCache size = 48 GB (8% of the data in HFiles). Random reads in 20 threads. I am going to make a pull request; I hope this is the right way to make a contribution to this cool product. was: Hi! This is my first time here, so please correct me if something is wrong. I want to propose a way to improve performance when the data in HFiles is much larger than the BlockCache (a common story in big data). The idea is to cache only part of the DATA blocks. This is good because LruBlockCache keeps working well and we save a huge amount of GC. Sometimes we have more data than fits into the BlockCache, which causes a high eviction rate.
Under heavy eviction, you can choose not to cache a block that would otherwise have been cached, on the assumption that caching it only causes churn: to cache the N+1th block, we have to evict the Nth block. See the picture in the attachment with the test below: requests per second are higher and GC is lower. The key point of the code: a new parameter, *hbase.lru.cache.data.block.percent*, which defaults to 100. If we set it to 0-99, the following logic kicks in: {code:java} public void cacheBlock(BlockCacheKey cacheKey, Cacheable buf, boolean inMemory) { if (cacheDataBlockPercent != 100 && buf.getBlockType().isData()) if (cacheKey.getOffset() % 100 >= cacheDataBlockPercent) return; ... // the same code as usual } {code} Description of the test: 4 nodes E5-2698 v4 @ 2.20GHz, 700 GB RAM. 4 RegionServers. 4 tables x 64 regions x 1.88 GB of data each = 600 GB total (FAST_DIFF only). Total BlockCache size = 48 GB (8% of the data in HFiles). Random reads in 20 threads. I am going to make a pull request; I hope this is the right way to make a contribution to this cool product. > BlockCache performance improve by reduce eviction rate > -- > > Key: HBASE-23887 > URL: https://issues.apache.org/jira/browse/HBASE-23887 > Project: HBase > Issue Type: Improvement > Components: BlockCache, Performance >Reporter: Danil Lipovoy >Priority: Minor > Attachments: 1582787018434_rs_metrics.jpg, > 1582801838065_rs_metrics_new.png, BC_LongRun.png, cmp.png, > evict_BC100_vs_BC23.png, read_requests_100pBC_vs_23pBC.png > > > Hi! > This is my first time here, so please correct me if something is wrong. > I want to propose a way to improve performance when the data in HFiles is much larger > than the BlockCache (a common story in big data). The idea is to cache only part of the DATA > blocks. This is good because LruBlockCache keeps working well and we save a huge amount > of GC. > Sometimes we have more data than fits into the BlockCache, which causes a > high eviction rate. In that case we can skip caching block N and instead > cache block N+1.
> We would evict block N quite soon anyway, which is why > skipping it is good for performance. > See the picture in the attachment with the test below: requests per second are higher, > GC is lower. > > The key point of the code: > a new parameter, *hbase.lru.cache.data.block.percent*, which defaults to > 100. > > If we set it to 1-99, the following logic kicks in: > > > {code:java} > public void cacheBlock(BlockCacheKey cacheKey, Cacheable buf, boolean > inMemory) { > if (cacheDataBlockPercent != 100 && buf.getBlockType().isData()) > if (cacheKey.getOffset() % 100 >= cacheDataBlockPercent) > return; > ... > // the same code as usual > } > {code} >
[jira] [Updated] (HBASE-23887) BlockCache performance improve by reduce eviction rate
[ https://issues.apache.org/jira/browse/HBASE-23887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Danil Lipovoy updated HBASE-23887: -- Description: Hi! This is my first time here, so please correct me if something is wrong. I want to propose a way to improve performance when the data in HFiles is much larger than the BlockCache (a common story in big data). The idea is to cache only part of the DATA blocks. This is good because LruBlockCache keeps working well and we save a huge amount of GC. Sometimes we have more data than fits into the BlockCache, which causes a high eviction rate. Under heavy eviction, you can choose not to cache a block that would otherwise have been cached, on the assumption that caching it only causes churn: to cache the N+1th block, we have to evict the Nth block. See the picture in the attachment with the test below: requests per second are higher and GC is lower. The key point of the code: a new parameter, *hbase.lru.cache.data.block.percent*, which defaults to 100. If we set it to 0-99, the following logic kicks in: {code:java} public void cacheBlock(BlockCacheKey cacheKey, Cacheable buf, boolean inMemory) { if (cacheDataBlockPercent != 100 && buf.getBlockType().isData()) if (cacheKey.getOffset() % 100 >= cacheDataBlockPercent) return; ... // the same code as usual } {code} Description of the test: 4 nodes E5-2698 v4 @ 2.20GHz, 700 GB RAM. 4 RegionServers. 4 tables x 64 regions x 1.88 GB of data each = 600 GB total (FAST_DIFF only). Total BlockCache size = 48 GB (8% of the data in HFiles). Random reads in 20 threads. I am going to make a pull request; I hope this is the right way to make a contribution to this cool product. was: Hi! This is my first time here, so please correct me if something is wrong. I want to propose a way to improve performance when the data in HFiles is much larger than the BlockCache (a common story in big data). The idea is to cache only part of the DATA blocks. This is good because LruBlockCache keeps working well and we save a huge amount of GC. See the picture in the attachment with the test below.
Requests per second are higher and GC is lower. The key point of the code: a new parameter, *hbase.lru.cache.data.block.percent*, which defaults to 100. If we set it to 0-99, the following logic kicks in: {code:java} public void cacheBlock(BlockCacheKey cacheKey, Cacheable buf, boolean inMemory) { if (cacheDataBlockPercent != 100 && buf.getBlockType().isData()) if (cacheKey.getOffset() % 100 >= cacheDataBlockPercent) return; ... // the same code as usual } {code} Description of the test: 4 nodes E5-2698 v4 @ 2.20GHz, 700 GB RAM. 4 RegionServers. 4 tables x 64 regions x 1.88 GB of data each = 600 GB total (FAST_DIFF only). Total BlockCache size = 48 GB (8% of the data in HFiles). Random reads in 20 threads. I am going to make a pull request; I hope this is the right way to make a contribution to this cool product. > BlockCache performance improve by reduce eviction rate > -- > > Key: HBASE-23887 > URL: https://issues.apache.org/jira/browse/HBASE-23887 > Project: HBase > Issue Type: Improvement > Components: BlockCache, Performance >Reporter: Danil Lipovoy >Priority: Minor > Attachments: 1582787018434_rs_metrics.jpg, > 1582801838065_rs_metrics_new.png, BC_LongRun.png, cmp.png, > evict_BC100_vs_BC23.png, read_requests_100pBC_vs_23pBC.png > > > Hi! > This is my first time here, so please correct me if something is wrong. > I want to propose a way to improve performance when the data in HFiles is much larger > than the BlockCache (a common story in big data). The idea is to cache only part of the DATA > blocks. This is good because LruBlockCache keeps working well and we save a huge amount > of GC. > Sometimes we have more data than fits into the BlockCache, which causes a > high eviction rate. > Under heavy eviction, you can choose not to cache a block that would otherwise have been > cached, on the assumption that caching it only causes churn: to cache the N+1th block, we > have to evict the Nth block. > See the picture in the attachment with the test below: requests per second are higher, > GC is lower.
> > The key point of the code: > a new parameter, *hbase.lru.cache.data.block.percent*, which defaults to > 100. > > If we set it to 0-99, the following logic kicks in: > > > {code:java} > public void cacheBlock(BlockCacheKey cacheKey, Cacheable buf, boolean > inMemory) { > if (cacheDataBlockPercent != 100 && buf.getBlockType().isData()) > if (cacheKey.getOffset() % 100 >= cacheDataBlockPercent) > return; > ... > // the same code as usual > } > {code} > > > Description of the test: > 4 nodes E5-2698 v4 @ 2.20GHz, 700 GB RAM. > 4 RegionServers > 4 tables x 64 regions x 1.88 GB of data each = 600 GB total (FAST_DIFF only) > Total BlockCache size = 48 GB (8% of the data in HFiles) > Random reads in 20 threads
[jira] [Updated] (HBASE-23887) BlockCache performance improve by reduce eviction rate
[ https://issues.apache.org/jira/browse/HBASE-23887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Danil Lipovoy updated HBASE-23887: -- Summary: BlockCache performance improve by reduce eviction rate (was: BlockCache performance improve by avoid of chache) > BlockCache performance improve by reduce eviction rate > -- > > Key: HBASE-23887 > URL: https://issues.apache.org/jira/browse/HBASE-23887 > Project: HBase > Issue Type: Improvement > Components: BlockCache, Performance >Reporter: Danil Lipovoy >Priority: Minor > Attachments: 1582787018434_rs_metrics.jpg, > 1582801838065_rs_metrics_new.png, BC_LongRun.png, cmp.png, > evict_BC100_vs_BC23.png, read_requests_100pBC_vs_23pBC.png > > > Hi! > This is my first time here, so please correct me if something is wrong. > I want to propose a way to improve performance when the data in HFiles is much larger > than the BlockCache (a common story in big data). The idea is to cache only part of the DATA > blocks. This is good because LruBlockCache keeps working well and we save a huge amount > of GC. See the picture in the attachment with the test below: requests per second are > higher, GC is lower. > > The key point of the code: > a new parameter, *hbase.lru.cache.data.block.percent*, which defaults to > 100. > > If we set it to 0-99, the following logic kicks in: > > > {code:java} > public void cacheBlock(BlockCacheKey cacheKey, Cacheable buf, boolean > inMemory) { > if (cacheDataBlockPercent != 100 && buf.getBlockType().isData()) > if (cacheKey.getOffset() % 100 >= cacheDataBlockPercent) > return; > ... > // the same code as usual > } > {code} > > > Description of the test: > 4 nodes E5-2698 v4 @ 2.20GHz, 700 GB RAM. > 4 RegionServers > 4 tables x 64 regions x 1.88 GB of data each = 600 GB total (FAST_DIFF only) > Total BlockCache size = 48 GB (8% of the data in HFiles) > Random reads in 20 threads > > I am going to make a pull request; I hope this is the right way to make some > contribution to this cool product. 
> -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-23887) BlockCache performance improve by avoid of chache
[ https://issues.apache.org/jira/browse/HBASE-23887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Danil Lipovoy updated HBASE-23887: -- Summary: BlockCache performance improve by avoid of chache (was: BlockCache performance improve) > BlockCache performance improve by avoid of chache > - > > Key: HBASE-23887 > URL: https://issues.apache.org/jira/browse/HBASE-23887 > Project: HBase > Issue Type: Improvement > Components: BlockCache, Performance >Reporter: Danil Lipovoy >Priority: Minor > Attachments: 1582787018434_rs_metrics.jpg, > 1582801838065_rs_metrics_new.png, BC_LongRun.png, cmp.png, > evict_BC100_vs_BC23.png, read_requests_100pBC_vs_23pBC.png > > > Hi! > This is my first time here, so please correct me if something is wrong. > I want to propose a way to improve performance when the data in HFiles is much larger > than the BlockCache (a common story in big data). The idea is to cache only part of the DATA > blocks. This is good because LruBlockCache keeps working well and we save a huge amount > of GC. See the picture in the attachment with the test below: requests per second are > higher, GC is lower. > > The key point of the code: > a new parameter, *hbase.lru.cache.data.block.percent*, which defaults to > 100. > > If we set it to 0-99, the following logic kicks in: > > > {code:java} > public void cacheBlock(BlockCacheKey cacheKey, Cacheable buf, boolean > inMemory) { > if (cacheDataBlockPercent != 100 && buf.getBlockType().isData()) > if (cacheKey.getOffset() % 100 >= cacheDataBlockPercent) > return; > ... > // the same code as usual > } > {code} > > > Description of the test: > 4 nodes E5-2698 v4 @ 2.20GHz, 700 GB RAM. > 4 RegionServers > 4 tables x 64 regions x 1.88 GB of data each = 600 GB total (FAST_DIFF only) > Total BlockCache size = 48 GB (8% of the data in HFiles) > Random reads in 20 threads > > I am going to make a pull request; I hope this is the right way to make some > contribution to this cool product. > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] Apache-HBase commented on pull request #1640: HBASE-24309 Avoid introducing log4j and slf4j-log4j dependencies for modules other than hbase-assembly
Apache-HBase commented on pull request #1640: URL: https://github.com/apache/hbase/pull/1640#issuecomment-625639250 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 36s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 43s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 3m 37s | master passed | | +1 :green_heart: | checkstyle | 1m 57s | master passed | | +1 :green_heart: | spotbugs | 22m 38s | master passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 19s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 29s | the patch passed | | +1 :green_heart: | checkstyle | 2m 1s | root: The patch generated 0 new + 3 unchanged - 1 fixed = 3 total (was 4) | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | xml | 0m 32s | The patch has no ill-formed XML file. | | +1 :green_heart: | hadoopcheck | 13m 9s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1. | | +1 :green_heart: | spotbugs | 29m 13s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 4m 42s | The patch does not generate ASF License warnings. 
| | | | 97m 19s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.8 Server=19.03.8 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1640/3/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/1640 | | Optional Tests | dupname asflicense hadoopcheck xml spotbugs hbaseanti checkstyle | | uname | Linux d9039b087e2b 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / fddb2dd65c | | Max. process+thread count | 137 (vs. ulimit of 12500) | | modules | C: hbase-common hbase-metrics-api hbase-metrics hbase-hadoop-compat hbase-client hbase-zookeeper hbase-replication hbase-balancer hbase-http hbase-asyncfs hbase-procedure hbase-server hbase-mapreduce hbase-testing-util hbase-thrift hbase-shell hbase-endpoint hbase-it hbase-rest hbase-examples hbase-shaded hbase-hbtop hbase-assembly hbase-archetypes/hbase-client-project . U: . | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1640/3/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) spotbugs=3.1.12 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (HBASE-23832) Old config hbase.hstore.compactionThreshold is ignored
[ https://issues.apache.org/jira/browse/HBASE-23832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17102266#comment-17102266 ] Michael Stack commented on HBASE-23832: --- Release note it at least. I see others 'catch' the config on the other side and convert it into whatever the new name is -- see the internals of Configuration, where it does a bunch of this stuff. > Old config hbase.hstore.compactionThreshold is ignored > -- > > Key: HBASE-23832 > URL: https://issues.apache.org/jira/browse/HBASE-23832 > Project: HBase > Issue Type: Bug >Affects Versions: 2.0.0 >Reporter: Anoop Sam John >Assignee: Sambit Mohapatra >Priority: Critical > > In 2.x we added the new name 'hbase.hstore.compaction.min' for this. For > compatibility we still allow the old config name and honor it in code: > {code} > minFilesToCompact = Math.max(2, conf.getInt(HBASE_HSTORE_COMPACTION_MIN_KEY, > /*old name*/ conf.getInt("hbase.hstore.compactionThreshold", 3))); > {code} > But if hbase.hstore.compactionThreshold alone is configured by the user, it has > no effect. > This is because hbase-default.xml ships the new config with a value of > 3, so the call conf.getInt(HBASE_HSTORE_COMPACTION_MIN_KEY, ...) always returns > 3 even when the user did not explicitly configure it and used > the old key instead. -- This message was sent by Atlassian Jira (v8.3.4#803005)
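The failure mode is easy to reproduce with a self-contained stand-in for the config lookup (a plain Map instead of HBase's Configuration; values are the ones from the ticket):

```java
import java.util.HashMap;
import java.util.Map;

// Why the fallback never fires: hbase-default.xml always supplies the new
// key, so the inner getInt(...) default (the old key) is never consulted.
public class FallbackBugSketch {
    // Minimal stand-in for Configuration.getInt(key, default).
    static int getInt(Map<String, String> conf, String key, int dflt) {
        String v = conf.get(key);
        return v == null ? dflt : Integer.parseInt(v);
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        conf.put("hbase.hstore.compaction.min", "3");       // from hbase-default.xml
        conf.put("hbase.hstore.compactionThreshold", "10"); // user's old-style setting

        int minFilesToCompact = Math.max(2,
            getInt(conf, "hbase.hstore.compaction.min",
                getInt(conf, "hbase.hstore.compactionThreshold", 3)));
        // The user's 10 is ignored: the defaulted new key shadows the old one.
        System.out.println(minFilesToCompact); // prints 3, not 10
    }
}
```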
[jira] [Commented] (HBASE-23832) Old config hbase.hstore.compactionThreshold is ignored
[ https://issues.apache.org/jira/browse/HBASE-23832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17102264#comment-17102264 ] Sean Busbey commented on HBASE-23832: - we removed and renamed tons of configs in 2.0. IMHO we could just document this one in the renamed list. http://hbase.apache.org/book.html#upgrade2.0.removed.configs http://hbase.apache.org/book.html#upgrade2.0.renamed.configs it looks like in this case we wanted to maintain compatibility for HBase 2, so I think it'd also be reasonable to do that. then we can remove it for HBase 3 and release note it like we did for all the things we removed from HBase 2. If we keep it then it would be better if we did the fallback using {{Configuration.addDeprecation}} so that folks get some warning about the coming change. > Old config hbase.hstore.compactionThreshold is ignored > -- > > Key: HBASE-23832 > URL: https://issues.apache.org/jira/browse/HBASE-23832 > Project: HBase > Issue Type: Bug >Affects Versions: 2.0.0 >Reporter: Anoop Sam John >Assignee: Sambit Mohapatra >Priority: Critical > > In 2.x we added the new name 'hbase.hstore.compaction.min' for this. For > compatibility we still allow the old config name and honor it in code: > {code} > minFilesToCompact = Math.max(2, conf.getInt(HBASE_HSTORE_COMPACTION_MIN_KEY, > /*old name*/ conf.getInt("hbase.hstore.compactionThreshold", 3))); > {code} > But if hbase.hstore.compactionThreshold alone is configured by the user, it has > no effect. > This is because hbase-default.xml ships the new config with a value of > 3, so the call conf.getInt(HBASE_HSTORE_COMPACTION_MIN_KEY, ...) always returns > 3 even when the user did not explicitly configure it and used > the old key instead. -- This message was sent by Atlassian Jira (v8.3.4#803005)
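The addDeprecation approach can be mimicked in a self-contained sketch (a plain Map stand-in, not Hadoop's real org.apache.hadoop.conf.Configuration; this roughly mirrors its write-through behavior, where setting a deprecated key also sets the new key, so load order decides the winner):

```java
import java.util.HashMap;
import java.util.Map;

// Mimic of key deprecation: writes to the old key are forwarded to the new
// key with a warning, so a later-loaded user file overrides defaults even
// when the user still uses the old name. (Stand-in only; Hadoop's real
// Configuration handles this with more machinery.)
public class DeprecatedKeySketch {
    private final Map<String, String> props = new HashMap<>();
    private final Map<String, String> oldToNew = new HashMap<>();

    void addDeprecation(String oldKey, String newKey) {
        oldToNew.put(oldKey, newKey);
    }

    void set(String key, String value) {
        props.put(key, value);
        String newKey = oldToNew.get(key);
        if (newKey != null) {
            System.err.println("WARN: " + key + " is deprecated; use " + newKey);
            props.put(newKey, value); // write through to the new name
        }
    }

    String get(String key, String dflt) {
        return props.getOrDefault(key, dflt);
    }

    public static void main(String[] args) {
        DeprecatedKeySketch conf = new DeprecatedKeySketch();
        conf.addDeprecation("hbase.hstore.compactionThreshold",
                            "hbase.hstore.compaction.min");
        conf.set("hbase.hstore.compaction.min", "3");       // hbase-default.xml
        conf.set("hbase.hstore.compactionThreshold", "10"); // user's old-style key
        // Unlike the buggy fallback, the user's value now wins.
        System.out.println(conf.get("hbase.hstore.compaction.min", "3")); // prints 10
    }
}
```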
[GitHub] [hbase] saintstack commented on a change in pull request #1640: HBASE-24309 Avoid introducing log4j and slf4j-log4j dependencies for modules other than hbase-assembly
saintstack commented on a change in pull request #1640: URL: https://github.com/apache/hbase/pull/1640#discussion_r421937064 ## File path: hbase-assembly/pom.xml ## @@ -318,5 +318,18 @@ jaxws-ri pom + Review comment: What happens when we run in-situ... i.e. build and then do ./bin/start-hbase.sh? Does it produce logs? ## File path: hbase-assembly/pom.xml ## @@ -318,5 +318,18 @@ jaxws-ri pom + Review comment: This is a good note. Should it go elsewhere, in the top-level pom for instance? (Maybe it is there already... Let me keep going) ## File path: hbase-archetypes/hbase-client-project/pom.xml ## @@ -43,31 +43,35 @@ org.apache.hbase hbase-testing-util - ${project.version} test org.apache.hbase hbase-common - ${project.version} Review comment: Good ## File path: hbase-shaded/pom.xml ## @@ -51,15 +51,14 @@ org.apache.hbase hbase-resource-bundle - ${project.version} true - + Review comment: This is targeted at hbase2 which is to run against hadoop2? This comment that it can't be non-optional is fixed now? Not needed? ## File path: pom.xml ## @@ -2861,6 +2917,14 @@ org.codehause.jackson jackson-mapper-asl + + org.slf4j + slf4j-log4j12 + + + log4j + log4j + Review comment: Painful. ## File path: hbase-server/pom.xml ## @@ -481,15 +473,6 @@ httpcore test - - - commons-logging - commons-logging - compile - Review comment: Fixed? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
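The pom.xml hunks under review were stripped of their XML by the mail archive. For orientation, a change of the kind being discussed typically looks like the following fragment (a hypothetical sketch; the exclusions use the artifact names from the review, but the enclosing dependency is illustrative):

```xml
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-common</artifactId>
  <exclusions>
    <!-- Keep the log4j 1.x binding and log4j itself out of library modules;
         hbase-assembly adds them back for the packaged distribution. -->
    <exclusion>
      <groupId>org.slf4j</groupId>
      <artifactId>slf4j-log4j12</artifactId>
    </exclusion>
    <exclusion>
      <groupId>log4j</groupId>
      <artifactId>log4j</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```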
[jira] [Commented] (HBASE-24296) release scripts should install yetus via the docker build
[ https://issues.apache.org/jira/browse/HBASE-24296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17102247#comment-17102247 ] Sean Busbey commented on HBASE-24296: - updated subject. getting a homebrew install of yetus across the boundary into the docker container was a huge mess. much easier to just do the install as a step in the dockerfile so that we can cache it there. > release scripts should install yetus via the docker build > - > > Key: HBASE-24296 > URL: https://issues.apache.org/jira/browse/HBASE-24296 > Project: HBase > Issue Type: Improvement > Components: community, scripts >Reporter: Sean Busbey >Assignee: Sean Busbey >Priority: Major > > right now we have to download yetus on each release run. we should be able to > point at a local install instead. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-24296) release scripts should install yetus via the docker build
[ https://issues.apache.org/jira/browse/HBASE-24296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Busbey updated HBASE-24296: Summary: release scripts should install yetus via the docker build (was: release scripts should be able to use an existing yetus install) > release scripts should install yetus via the docker build > - > > Key: HBASE-24296 > URL: https://issues.apache.org/jira/browse/HBASE-24296 > Project: HBase > Issue Type: Improvement > Components: community, scripts >Reporter: Sean Busbey >Assignee: Sean Busbey >Priority: Major > > right now we have to download yetus on each release run. we should be able to > point at a local install instead. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-24335) Support deleteall with ts but without column in shell mode
[ https://issues.apache.org/jira/browse/HBASE-24335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17102240#comment-17102240 ] Hudson commented on HBASE-24335: Results for branch branch-2.3 [build #71 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/71/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/71/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/71/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/71/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/71/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Support deleteall with ts but without column in shell mode > -- > > Key: HBASE-24335 > URL: https://issues.apache.org/jira/browse/HBASE-24335 > Project: HBase > Issue Type: Improvement > Components: shell >Affects Versions: 3.0.0-alpha-1 >Reporter: Zheng Wang >Assignee: Zheng Wang >Priority: Major > Fix For: 3.0.0-alpha-1, 2.3.0, 2.4.0 > > > The position after the rowkey is the column, so we can't specify only the ts now. > My proposal is to use an empty string to represent that no column is specified. > usage: > deleteall 'test','r1','',158876590 > deleteall 'test', \{ROWPREFIXFILTER => 'prefix'}, '', 158876590
[jira] [Commented] (HBASE-24342) [Flakey Tests] Disable TestClusterPortAssignment.testClusterPortAssignment as it can't pass 100% of the time
[ https://issues.apache.org/jira/browse/HBASE-24342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17102241#comment-17102241 ] Hudson commented on HBASE-24342: Results for branch branch-2.3 [build #71 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/71/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/71/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/71/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/71/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/71/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > [Flakey Tests] Disable TestClusterPortAssignment.testClusterPortAssignment as > it can't pass 100% of the time > > > Key: HBASE-24342 > URL: https://issues.apache.org/jira/browse/HBASE-24342 > Project: HBase > Issue Type: Bug > Components: flakies, test >Reporter: Michael Stack >Assignee: Michael Stack >Priority: Major > Fix For: 3.0.0-alpha-1, 2.3.0 > > > This is a BindException special. We get randomFreePort and then put up the > processes.
> {code} > 2020-05-07 00:30:15,844 INFO [Time-limited test] http.HttpServer(1080): > HttpServer.start() threw a non Bind IOException > java.net.BindException: Port in use: 0.0.0.0:59568 > at > org.apache.hadoop.hbase.http.HttpServer.openListeners(HttpServer.java:1146) > at org.apache.hadoop.hbase.http.HttpServer.start(HttpServer.java:1077) > at org.apache.hadoop.hbase.http.InfoServer.start(InfoServer.java:148) > at > org.apache.hadoop.hbase.regionserver.HRegionServer.putUpWebUI(HRegionServer.java:2133) > at > org.apache.hadoop.hbase.regionserver.HRegionServer.(HRegionServer.java:670) > at org.apache.hadoop.hbase.master.HMaster.(HMaster.java:511) > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) > at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:423) > at > org.apache.hadoop.hbase.util.JVMClusterUtil.createMasterThread(JVMClusterUtil.java:132) > at > org.apache.hadoop.hbase.LocalHBaseCluster.addMaster(LocalHBaseCluster.java:239) > at > org.apache.hadoop.hbase.LocalHBaseCluster.(LocalHBaseCluster.java:181) > at > org.apache.hadoop.hbase.MiniHBaseCluster.init(MiniHBaseCluster.java:245) > at > org.apache.hadoop.hbase.MiniHBaseCluster.(MiniHBaseCluster.java:115) > at > org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster(HBaseTestingUtility.java:1178) > at > org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:1142) > at > org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:1106) > at > org.apache.hadoop.hbase.TestClusterPortAssignment.testClusterPortAssignment(TestClusterPortAssignment.java:57) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) > at > org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) > at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
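The failure mode behind the stack trace above can be sketched outside HBase. This is not the test's actual code, just an illustration of why "pick a random free port, then bind to it later" is inherently racy: between releasing the probe socket and the real bind, another process can claim the port, which surfaces as the BindException:

```java
import java.io.IOException;
import java.net.ServerSocket;

// Sketch of the race TestClusterPortAssignment trips over: "find a random free
// port" and "bind to it" are two separate steps with a window in between.
class RandomPortRace {
    // Step 1: ask the OS for an ephemeral port, then release it immediately.
    static int randomFreePort() throws IOException {
        try (ServerSocket s = new ServerSocket(0)) {
            return s.getLocalPort();
        }
    }

    public static void main(String[] args) throws IOException {
        int port = randomFreePort();
        // Step 2 happens later, e.g. when the info server starts. By then the
        // port may already be taken, which surfaces as
        // "java.net.BindException: Port in use". Usually this succeeds, but not
        // 100% of the time -- hence the flakiness.
        try (ServerSocket server = new ServerSocket(port)) {
            System.out.println("bound to " + port);
        }
    }
}
```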
[jira] [Commented] (HBASE-24338) [Flakey Tests] NPE in TestRaceBetweenSCPAndDTP
[ https://issues.apache.org/jira/browse/HBASE-24338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17102243#comment-17102243 ] Hudson commented on HBASE-24338: Results for branch branch-2.3 [build #71 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/71/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/71/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/71/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/71/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/71/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > [Flakey Tests] NPE in TestRaceBetweenSCPAndDTP > -- > > Key: HBASE-24338 > URL: https://issues.apache.org/jira/browse/HBASE-24338 > Project: HBase > Issue Type: Bug > Components: flakies, test >Reporter: Michael Stack >Assignee: Michael Stack >Priority: Major > Fix For: 3.0.0-alpha-1, 2.3.0 > > > Seen in local runs -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-24328) skip duplicate GCMultipleMergedRegionsProcedure while previous finished
[ https://issues.apache.org/jira/browse/HBASE-24328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17102244#comment-17102244 ] Hudson commented on HBASE-24328: Results for branch branch-2.3 [build #71 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/71/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/71/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/71/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/71/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/71/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > skip duplicate GCMultipleMergedRegionsProcedure while previous finished > --- > > Key: HBASE-24328 > URL: https://issues.apache.org/jira/browse/HBASE-24328 > Project: HBase > Issue Type: Improvement >Reporter: niuyulin >Assignee: niuyulin >Priority: Major > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] busbey commented on a change in pull request #1677: HBASE-24313 [DOCS] Document ignoreTimestamps option added to HashTabl…
busbey commented on a change in pull request #1677: URL: https://github.com/apache/hbase/pull/1677#discussion_r421930948 ## File path: src/main/asciidoc/_chapters/ops_mgt.adoc ## @@ -647,6 +660,16 @@ For major 1.x versions, minimum minor release including it is *1.4.10*. For major 2.x versions, minimum minor release including it is *2.1.5*. +.Additional info on ignoreTimestamps +[NOTE] + +"ignoreTimestamps" was only added by +link:https://issues.apache.org/jira/browse/HBASE-24302[HBASE-24302], so it may not be available on +all released versions. +Not available in any 1.x versions. Review comment: could we backport it to branch-1? or is there some implementation reason we can't? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (HBASE-24295) [Chaos Monkey] abstract logging through the class hierarchy
[ https://issues.apache.org/jira/browse/HBASE-24295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17102242#comment-17102242 ] Hudson commented on HBASE-24295: Results for branch branch-2.3 [build #71 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/71/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/71/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/71/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/71/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/71/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > [Chaos Monkey] abstract logging through the class hierarchy > --- > > Key: HBASE-24295 > URL: https://issues.apache.org/jira/browse/HBASE-24295 > Project: HBase > Issue Type: Task > Components: integration tests >Affects Versions: 2.3.0 >Reporter: Nick Dimiduk >Assignee: Nick Dimiduk >Priority: Minor > Fix For: 3.0.0-alpha-1, 2.3.0 > > > Running chaos monkey and watching the logs, it's very difficult to tell what > actions are actually running. There's lots of shared methods through the > class hierarchy that extends from {{abstract class Action}}, and each class > comes with its own {{Logger}}. 
As a result, the logs have useless stuff like > {noformat} > INFO actions.Action: Started regionserver... > {noformat} > Add {{protected abstract Logger getLogger()}} to the class's internal > interface, and have the concrete implementations provide their logger. -- This message was sent by Atlassian Jira (v8.3.4#803005)
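The proposed shape of the fix might look like the sketch below. `Logger` here is a minimal stand-in for org.slf4j.Logger so the example stays self-contained, and `RestartRsAction` is a hypothetical concrete action for illustration, not HBase's actual class:

```java
// Minimal stand-in for org.slf4j.Logger, so the sketch is self-contained.
interface Logger {
    void info(String msg);
}

// Each concrete action supplies its own Logger, so shared helpers inherited
// from Action log under the concrete class's name instead of "actions.Action".
abstract class Action {
    protected abstract Logger getLogger();

    // A shared helper: with the abstract accessor, the log line identifies
    // which concrete action is actually running.
    void startRegionServer() {
        getLogger().info("Started regionserver...");
    }
}

// Hypothetical concrete action, for illustration only.
class RestartRsAction extends Action {
    private static final Logger LOG =
        msg -> System.out.println("INFO actions.RestartRsAction: " + msg);

    @Override
    protected Logger getLogger() {
        return LOG;
    }

    public static void main(String[] args) {
        // prints: INFO actions.RestartRsAction: Started regionserver...
        new RestartRsAction().startRegionServer();
    }
}
```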
[GitHub] [hbase] gjacoby126 commented on pull request #1655: HBASE-24321 - Add writable MinVersions and read-only Scan to coproc S…
gjacoby126 commented on pull request #1655: URL: https://github.com/apache/hbase/pull/1655#issuecomment-625622291 @Apache9 - I think I've addressed all your comments so far, but when you get a minute please let me know if there are any other changes you'd like, or if this is ready to go.
[GitHub] [hbase] Apache-HBase commented on pull request #1624: HBASE-24165 maxPoolSize is logged incorrectly in ByteBufferPool
Apache-HBase commented on pull request #1624: URL: https://github.com/apache/hbase/pull/1624#issuecomment-625621340 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 0s | Docker mode activated. | | -1 :x: | docker | 0m 57s | Docker failed to build yetus/hbase:d9b4982ad4. | | Subsystem | Report/Notes | |--:|:-| | GITHUB PR | https://github.com/apache/hbase/pull/1624 | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1624/4/console | | versions | git=2.17.1 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (HBASE-23832) Old config hbase.hstore.compactionThreshold is ignored
[ https://issues.apache.org/jira/browse/HBASE-23832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17102233#comment-17102233 ] Duo Zhang commented on HBASE-23832: --- I think keeping an old config across a whole major release is enough. > Old config hbase.hstore.compactionThreshold is ignored > -- > > Key: HBASE-23832 > URL: https://issues.apache.org/jira/browse/HBASE-23832 > Project: HBase > Issue Type: Bug >Affects Versions: 2.0.0 >Reporter: Anoop Sam John >Assignee: Sambit Mohapatra >Priority: Critical > > In 2.x we added new name 'hbase.hstore.compaction.min' for this. Still for > compatibility we allow the old config name and honor that in code > {code} > minFilesToCompact = Math.max(2, conf.getInt(HBASE_HSTORE_COMPACTION_MIN_KEY, > /*old name*/ conf.getInt("hbase.hstore.compactionThreshold", 3))); > {code} > But if hbase.hstore.compactionThreshold alone is configured by user, there is > no impact of that. > This is because in hbase-default.xml we have the new config with a value of > 3. So the call conf.getInt(HBASE_HSTORE_COMPACTION_MIN_KEY) always return a > value 3 even if it is not explicitly configured by customer and instead used > the old key. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-24189) Regionserver recreates region folders in HDFS after replaying WAL with removed table entries
[ https://issues.apache.org/jira/browse/HBASE-24189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17102230#comment-17102230 ] Anoop Sam John commented on HBASE-24189: Another possible solution would be this: when we open a region, we create the recovered.edits directory under it. So for the WALSplitter to write the edits file under the region, there is ideally no need to create the dirs; at least it doesn't need to create the region dir. But in code, if the region/recovered.edits dir is not there we create it using mkdirs, so even if the region dir is not there, we end up creating it. We can avoid doing this mkdirs, and instead just log at INFO level and skip all edits for that region. Sounds like a less risky and simple thing (?) > Regionserver recreates region folders in HDFS after replaying WAL with > removed table entries > > > Key: HBASE-24189 > URL: https://issues.apache.org/jira/browse/HBASE-24189 > Project: HBase > Issue Type: Bug > Components: regionserver, wal >Affects Versions: 2.2.4 > Environment: * HDFS 3.1.3 > * HBase 2.1.4 > * OpenJDK 8 >Reporter: Andrey Elenskiy >Assignee: Anoop Sam John >Priority: Major > > Under the following scenario region directories in HDFS can be recreated with > only recovered.edits in them: > # Create table "test" > # Put into "test" > # Delete table "test" > # Create table "test" again > # Crash the regionserver to which the put went, to force the WAL replay > # Region directory in old table is recreated in new table > # hbase hbck returns inconsistency > This appears to happen due to the fact that WALs are not cleaned up once a > table is deleted and they still contain the edits from the old table. I've tried > the wal_roll command on the regionserver before crashing it, but it doesn't seem > to help, as under some circumstances there are still WAL files around.
The > only solution that works consistently is to restart regionserver before > creating the table at step 4 because that triggers log cleanup on startup: > [https://github.com/apache/hbase/blob/f3ee9b8aa37dd30d34ff54cd39fb9b4b6d22e683/hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/store/wal/WALProcedureStore.java#L508|https://github.com/apache/hbase/blob/f3ee9b8aa37dd30d34ff54cd39fb9b4b6d22e683/hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/store/wal/WALProcedureStore.java#L508)] > > Truncating a table also would be a workaround by in our case it's a no-go as > we create and delete tables in our tests which run back to back (create table > in the beginning of the test and delete in the end of the test). > A nice option in our case would be to provide hbase shell utility to force > clean up of log files manually as I realize that it's not really viable to > clean all of those up every time some table is removed. -- This message was sent by Atlassian Jira (v8.3.4#803005)
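Anoop's suggestion above could be sketched like this, using java.nio.file as a stand-in for the HDFS FileSystem API (the real change would live in WALSplitter; the helper name is hypothetical):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch of the suggestion: only create the recovered.edits subdirectory when
// the region directory already exists. If it is gone (table deleted), log and
// signal the caller to skip every edit for that region instead of recreating
// the directory.
class RecoveredEditsSketch {
    static Path recoveredEditsDir(Path regionDir) throws IOException {
        if (!Files.isDirectory(regionDir)) {
            System.out.println("INFO: region dir " + regionDir
                + " does not exist; skipping all edits for this region");
            return null; // caller drops the region's edits
        }
        // createDirectories only has to touch the subdir here, because the
        // parent (the region dir) is already known to exist.
        return Files.createDirectories(regionDir.resolve("recovered.edits"));
    }

    public static void main(String[] args) throws IOException {
        Path region = Files.createTempDirectory("region");
        System.out.println(recoveredEditsDir(region));            // existing region: subdir created
        System.out.println(recoveredEditsDir(region.resolve("gone"))); // deleted region: null
    }
}
```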
[jira] [Commented] (HBASE-23832) Old config hbase.hstore.compactionThreshold is ignored
[ https://issues.apache.org/jira/browse/HBASE-23832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17102225#comment-17102225 ] Anoop Sam John commented on HBASE-23832: I believe we deprecated the config in 1.x. As per the rule, we could then remove the config completely in 3.0. But I don't see us ever doing such a thing for config names. Should we? What do you say? [~stack], [~zhangduo], [~busbey]? It will raise more questions and is not as easy as API removal: with an API removal, the user comes to know about the removal. For config names, the question is how/whether we should handle usage of the old config, even in 3.x. Or should we never remove old config names? > Old config hbase.hstore.compactionThreshold is ignored > -- > > Key: HBASE-23832 > URL: https://issues.apache.org/jira/browse/HBASE-23832 > Project: HBase > Issue Type: Bug >Affects Versions: 2.0.0 >Reporter: Anoop Sam John >Assignee: Sambit Mohapatra >Priority: Critical > > In 2.x we added new name 'hbase.hstore.compaction.min' for this. Still for > compatibility we allow the old config name and honor that in code > {code} > minFilesToCompact = Math.max(2, conf.getInt(HBASE_HSTORE_COMPACTION_MIN_KEY, > /*old name*/ conf.getInt("hbase.hstore.compactionThreshold", 3))); > {code} > But if hbase.hstore.compactionThreshold alone is configured by user, there is > no impact of that. > This is because in hbase-default.xml we have the new config with a value of > 3. So the call conf.getInt(HBASE_HSTORE_COMPACTION_MIN_KEY) always return a > value 3 even if it is not explicitly configured by customer and instead used > the old key.
[jira] [Updated] (HBASE-19577) Use log4j2 instead of log4j for logging
[ https://issues.apache.org/jira/browse/HBASE-19577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-19577: -- Summary: Use log4j2 instead of log4j for logging (was: Move off log4j1 as our logging backend.) > Use log4j2 instead of log4j for logging > --- > > Key: HBASE-19577 > URL: https://issues.apache.org/jira/browse/HBASE-19577 > Project: HBase > Issue Type: Sub-task > Components: logging >Reporter: Michael Stack >Assignee: Duo Zhang >Priority: Major > > See HBASE-10092 for discussion. We have inserted slf4j as our frontend. Need > to swap out the 5-year-old log4j1. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-23832) Old config hbase.hstore.compactionThreshold is ignored
[ https://issues.apache.org/jira/browse/HBASE-23832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17102217#comment-17102217 ] Guanghao Zhang commented on HBASE-23832: +1 > Old config hbase.hstore.compactionThreshold is ignored > -- > > Key: HBASE-23832 > URL: https://issues.apache.org/jira/browse/HBASE-23832 > Project: HBase > Issue Type: Bug >Affects Versions: 2.0.0 >Reporter: Anoop Sam John >Assignee: Sambit Mohapatra >Priority: Critical > > In 2.x we added new name 'hbase.hstore.compaction.min' for this. Still for > compatibility we allow the old config name and honor that in code > {code} > minFilesToCompact = Math.max(2, conf.getInt(HBASE_HSTORE_COMPACTION_MIN_KEY, > /*old name*/ conf.getInt("hbase.hstore.compactionThreshold", 3))); > {code} > But if hbase.hstore.compactionThreshold alone is configured by user, there is > no impact of that. > This is because in hbase-default.xml we have the new config with a value of > 3. So the call conf.getInt(HBASE_HSTORE_COMPACTION_MIN_KEY) always return a > value 3 even if it is not explicitly configured by customer and instead used > the old key. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Issue Comment Deleted] (HBASE-23832) Old config hbase.hstore.compactionThreshold is ignored
[ https://issues.apache.org/jira/browse/HBASE-23832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guanghao Zhang updated HBASE-23832: --- Comment: was deleted (was: How about remove "hbase.hstore.compaction.min" config for hbase-default.xml?) > Old config hbase.hstore.compactionThreshold is ignored > -- > > Key: HBASE-23832 > URL: https://issues.apache.org/jira/browse/HBASE-23832 > Project: HBase > Issue Type: Bug >Affects Versions: 2.0.0 >Reporter: Anoop Sam John >Assignee: Sambit Mohapatra >Priority: Critical > > In 2.x we added new name 'hbase.hstore.compaction.min' for this. Still for > compatibility we allow the old config name and honor that in code > {code} > minFilesToCompact = Math.max(2, conf.getInt(HBASE_HSTORE_COMPACTION_MIN_KEY, > /*old name*/ conf.getInt("hbase.hstore.compactionThreshold", 3))); > {code} > But if hbase.hstore.compactionThreshold alone is configured by user, there is > no impact of that. > This is because in hbase-default.xml we have the new config with a value of > 3. So the call conf.getInt(HBASE_HSTORE_COMPACTION_MIN_KEY) always return a > value 3 even if it is not explicitly configured by customer and instead used > the old key. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-23832) Old config hbase.hstore.compactionThreshold is ignored
[ https://issues.apache.org/jira/browse/HBASE-23832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17102215#comment-17102215 ] Guanghao Zhang commented on HBASE-23832: How about remove "hbase.hstore.compaction.min" config for hbase-default.xml? > Old config hbase.hstore.compactionThreshold is ignored > -- > > Key: HBASE-23832 > URL: https://issues.apache.org/jira/browse/HBASE-23832 > Project: HBase > Issue Type: Bug >Affects Versions: 2.0.0 >Reporter: Anoop Sam John >Assignee: Sambit Mohapatra >Priority: Critical > > In 2.x we added new name 'hbase.hstore.compaction.min' for this. Still for > compatibility we allow the old config name and honor that in code > {code} > minFilesToCompact = Math.max(2, conf.getInt(HBASE_HSTORE_COMPACTION_MIN_KEY, > /*old name*/ conf.getInt("hbase.hstore.compactionThreshold", 3))); > {code} > But if hbase.hstore.compactionThreshold alone is configured by user, there is > no impact of that. > This is because in hbase-default.xml we have the new config with a value of > 3. So the call conf.getInt(HBASE_HSTORE_COMPACTION_MIN_KEY) always return a > value 3 even if it is not explicitly configured by customer and instead used > the old key. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HBASE-24344) Release 2.2.5
Guanghao Zhang created HBASE-24344: -- Summary: Release 2.2.5 Key: HBASE-24344 URL: https://issues.apache.org/jira/browse/HBASE-24344 Project: HBase Issue Type: Umbrella Reporter: Guanghao Zhang
[jira] [Resolved] (HBASE-24310) Use Slf4jRequestLog for hbase-http
[ https://issues.apache.org/jira/browse/HBASE-24310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang resolved HBASE-24310. --- Fix Version/s: 2.3.0 3.0.0-alpha-1 Hadoop Flags: Incompatible change,Reviewed (was: Incompatible change) Resolution: Fixed Pushed to branch-2.3+. Thanks [~stack] for reviewing. > Use Slf4jRequestLog for hbase-http > -- > > Key: HBASE-24310 > URL: https://issues.apache.org/jira/browse/HBASE-24310 > Project: HBase > Issue Type: Sub-task > Components: logging >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > Fix For: 3.0.0-alpha-1, 2.3.0 > > > To remove the direct dependency on log4j in hbase-http server. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] Apache9 commented on pull request #1664: HBASE-24333 Backport HBASE-24304 "Separate a hbase-asyncfs module" to…
Apache9 commented on pull request #1664: URL: https://github.com/apache/hbase/pull/1664#issuecomment-625603841 Tried 'mvn test -Dhadoop.profile=3.0' locally, still no problem...
[GitHub] [hbase] Apache9 commented on pull request #1664: HBASE-24333 Backport HBASE-24304 "Separate a hbase-asyncfs module" to…
Apache9 commented on pull request #1664: URL: https://github.com/apache/hbase/pull/1664#issuecomment-625602277 Strange. Trying mvn dependency:tree -Dhadoop.profile=3.0 with JDK 11, we do have this dependency for hbase-rest: > [INFO] +- org.glassfish.jersey.containers:jersey-container-servlet-core:jar:2.25.1:compile [INFO] | +- org.glassfish.hk2.external:javax.inject:jar:2.5.0-b32:compile [INFO] | \- org.glassfish.jersey.core:jersey-common:jar:2.25.1:compile [INFO] | +- org.glassfish.jersey.bundles.repackaged:jersey-guava:jar:2.25.1:compile [INFO] | \- org.glassfish.hk2:osgi-resource-locator:jar:1.0.1:compile Let me try to run the UTs under hbase-rest locally.
[jira] [Commented] (HBASE-24310) Use Slf4jRequestLog for hbase-http
[ https://issues.apache.org/jira/browse/HBASE-24310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17102194#comment-17102194 ] Duo Zhang commented on HBASE-24310: --- Filed HBASE-24343. > Use Slf4jRequestLog for hbase-http > -- > > Key: HBASE-24310 > URL: https://issues.apache.org/jira/browse/HBASE-24310 > Project: HBase > Issue Type: Sub-task > Components: logging >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > > To remove the direct dependency on log4j in hbase-http server. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HBASE-24343) Document how to config request log for master and rs info server
Duo Zhang created HBASE-24343: - Summary: Document how to config request log for master and rs info server Key: HBASE-24343 URL: https://issues.apache.org/jira/browse/HBASE-24343 Project: HBase Issue Type: Sub-task Components: documentation Reporter: Duo Zhang
[jira] [Commented] (HBASE-24310) Use Slf4jRequestLog for hbase-http
[ https://issues.apache.org/jira/browse/HBASE-24310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17102189#comment-17102189 ] Michael Stack commented on HBASE-24310: --- File issue on adding section to refguide on how to config request appender? > Use Slf4jRequestLog for hbase-http > -- > > Key: HBASE-24310 > URL: https://issues.apache.org/jira/browse/HBASE-24310 > Project: HBase > Issue Type: Sub-task > Components: logging >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > > To remove the direct dependency on log4j in hbase-http server. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] Apache-HBase commented on pull request #1584: HBASE-24256 When fixOverlap hits the max region limit, it is possible…
Apache-HBase commented on pull request #1584: URL: https://github.com/apache/hbase/pull/1584#issuecomment-625593551

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| +0 :ok: | reexec | 3m 9s | Docker mode activated. |
| -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck |
||| _ Prechecks _ |
||| _ master Compile Tests _ |
| +1 :green_heart: | mvninstall | 4m 38s | master passed |
| +1 :green_heart: | compile | 1m 20s | master passed |
| +1 :green_heart: | shadedjars | 6m 52s | branch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 0m 35s | master passed |
||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 3m 49s | the patch passed |
| +1 :green_heart: | compile | 0m 59s | the patch passed |
| +1 :green_heart: | javac | 0m 59s | the patch passed |
| +1 :green_heart: | shadedjars | 6m 13s | patch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 0m 37s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 205m 43s | hbase-server in the patch passed. |
| | | | 235m 46s | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | Client=19.03.8 Server=19.03.8 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1584/6/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/1584 |
| Optional Tests | javac javadoc unit shadedjars compile |
| uname | Linux 60c2d1720ecc 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / fc283f7a68 |
| Default Java | 1.8.0_232 |
| Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1584/6/testReport/ |
| Max. process+thread count | 3054 (vs. ulimit of 12500) |
| modules | C: hbase-server U: hbase-server |
| Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1584/6/console |
| versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
| Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |

This message was automatically generated.
[jira] [Commented] (HBASE-23938) Replicate slow/large RPC calls to HDFS
[ https://issues.apache.org/jira/browse/HBASE-23938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17102179#comment-17102179 ] Anoop Sam John commented on HBASE-23938:

When the Mutate ops are suffering and giving responseTooSlow, wouldn't writing those logs to a system table (memstore) put more load on the cluster? Is persisting to an HDFS file not enough?

> Replicate slow/large RPC calls to HDFS
> --
>
> Key: HBASE-23938
> URL: https://issues.apache.org/jira/browse/HBASE-23938
> Project: HBase
> Issue Type: Sub-task
> Affects Versions: 3.0.0-alpha-1, 2.3.0, 1.7.0
> Reporter: Viraj Jasani
> Assignee: Viraj Jasani
> Priority: Major
> Fix For: 3.0.0-alpha-1, 2.3.0
>
> Attachments: Screen Shot 2020-05-07 at 12.01.26 AM.png
>
> We should provide the capability to replicate complete slow and large RPC logs to HDFS, or create a new system table, in addition to the Ring Buffer. This way we don't lose any slow logs and the operator can retrieve all the slow/large logs. Replicating logs to HDFS / creating a new system table should be configurable.
[jira] [Updated] (HBASE-24250) CatalogJanitor resubmits GCMultipleMergedRegionsProcedure for the same region
[ https://issues.apache.org/jira/browse/HBASE-24250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-24250:

Fix Version/s: (was: 2.4.0)

> CatalogJanitor resubmits GCMultipleMergedRegionsProcedure for the same region
> -
>
> Key: HBASE-24250
> URL: https://issues.apache.org/jira/browse/HBASE-24250
> Project: HBase
> Issue Type: Bug
> Components: master
> Affects Versions: 2.2.4
> Environment: hdfs 3.1.3 with erasure coding
> hbase 2.2.4
> Reporter: Andrey Elenskiy
> Assignee: niuyulin
> Priority: Major
> Fix For: 3.0.0-alpha-1, 2.3.0
>
> If a lot of regions were merged (due to a change of region sizes, for example), there can be a long backlog of procedures to clean up the merged regions. If going through this backlog is slower than the CatalogJanitor's scan interval, it will end up resubmitting GCMultipleMergedRegionsProcedure for the same regions over and over again.
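The resubmission loop described in the issue amounts to a missing in-flight guard in the janitor's scan. A minimal sketch of such a guard (illustrative only, with hypothetical names; this is not the actual CatalogJanitor code):

```python
class GcSubmissionGuard:
    """Hypothetical sketch: track regions whose GC procedure is still
    in flight, so a scan interval shorter than the backlog drain time
    cannot queue duplicate cleanup procedures for the same region."""

    def __init__(self):
        self._in_flight = set()

    def maybe_submit(self, region, submit):
        # Skip regions whose cleanup procedure is already queued/running.
        if region in self._in_flight:
            return False
        self._in_flight.add(region)
        submit(region)
        return True

    def on_finished(self, region):
        # Called when the procedure completes, allowing a future GC.
        self._in_flight.discard(region)
```

With this guard, a scan that fires again before the backlog drains would see `maybe_submit` return False for already-submitted regions instead of enqueueing duplicates.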
[GitHub] [hbase] binlijin commented on pull request #1669: HBASE-24338 [Flakey Tests] NPE in TestRaceBetweenSCPAndDTP
binlijin commented on pull request #1669: URL: https://github.com/apache/hbase/pull/1669#issuecomment-625588491

@saintstack sir, might TestRaceBetweenSCPAndTRSP have the same problem?
[jira] [Commented] (HBASE-24335) Support deleteall with ts but without column in shell mode
[ https://issues.apache.org/jira/browse/HBASE-24335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17102159#comment-17102159 ] Hudson commented on HBASE-24335:

Results for branch branch-2 [build #2648 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2648/]: (/) *{color:green}+1 overall{color}*

details (if available):
(/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2648/General_20Nightly_20Build_20Report/]
(/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2648/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]
(/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2648/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]
(/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2648/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]
(/) {color:green}+1 source release artifact{color} -- See build output for details.
(/) {color:green}+1 client integration test{color}

> Support deleteall with ts but without column in shell mode
> --
>
> Key: HBASE-24335
> URL: https://issues.apache.org/jira/browse/HBASE-24335
> Project: HBase
> Issue Type: Improvement
> Components: shell
> Affects Versions: 3.0.0-alpha-1
> Reporter: Zheng Wang
> Assignee: Zheng Wang
> Priority: Major
> Fix For: 3.0.0-alpha-1, 2.3.0, 2.4.0
>
> The position after the rowkey is the column, so we can't specify only the ts now.
> My proposal is to use an empty string to represent no column specified.
> Usage:
> deleteall 'test','r1','',158876590
> deleteall 'test', \{ROWPREFIXFILTER => 'prefix'}, '', 158876590
[jira] [Commented] (HBASE-24328) skip duplicate GCMultipleMergedRegionsProcedure while previous finished
[ https://issues.apache.org/jira/browse/HBASE-24328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17102160#comment-17102160 ] Hudson commented on HBASE-24328:

Results for branch branch-2 [build #2648 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2648/]: (/) *{color:green}+1 overall{color}*

details (if available):
(/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2648/General_20Nightly_20Build_20Report/]
(/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2648/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]
(/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2648/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]
(/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2648/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]
(/) {color:green}+1 source release artifact{color} -- See build output for details.
(/) {color:green}+1 client integration test{color}

> skip duplicate GCMultipleMergedRegionsProcedure while previous finished
> ---
>
> Key: HBASE-24328
> URL: https://issues.apache.org/jira/browse/HBASE-24328
> Project: HBase
> Issue Type: Improvement
> Reporter: niuyulin
> Assignee: niuyulin
> Priority: Major
[jira] [Commented] (HBASE-24316) GCMulitpleMergedRegionsProcedure is not idempotent
[ https://issues.apache.org/jira/browse/HBASE-24316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17102158#comment-17102158 ] Hudson commented on HBASE-24316:

Results for branch branch-2 [build #2648 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2648/]: (/) *{color:green}+1 overall{color}*

details (if available):
(/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2648/General_20Nightly_20Build_20Report/]
(/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2648/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]
(/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2648/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]
(/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2648/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]
(/) {color:green}+1 source release artifact{color} -- See build output for details.
(/) {color:green}+1 client integration test{color}

> GCMulitpleMergedRegionsProcedure is not idempotent
> ---
>
> Key: HBASE-24316
> URL: https://issues.apache.org/jira/browse/HBASE-24316
> Project: HBase
> Issue Type: Bug
> Components: proc-v2
> Affects Versions: 3.0.0-alpha-1, 2.3.0, 2.4.0
> Reporter: Huaxiang Sun
> Assignee: Huaxiang Sun
> Priority: Major
> Fix For: 3.0.0-alpha-1, 2.3.0, 2.4.0
>
> Currently deleteMergeQualifiers() is not idempotent. If two GCMulitpleMergedRegionsProcedures are run for the same merged child region, the second run will delete the row for the merged region from the meta table and leave a hole. It needs to make sure it only deletes columns with merge qualifiers.
> [https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/MetaTableAccessor.java#L1849]
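The idempotency point above (delete only the merge-qualifier columns, never the whole meta row) can be sketched as follows. This is an illustrative model with a plain dict standing in for the meta row; it is not the actual MetaTableAccessor code, and the qualifier names are hypothetical:

```python
def delete_merge_qualifiers(meta_row, is_merge_qualifier):
    """Remove only merge-qualifier columns from a dict-modeled meta row.

    Deleting the whole row would leave a hole in meta if a second
    procedure runs for the same merged child region; deleting only the
    merge columns makes a repeat run a harmless no-op.
    """
    # Snapshot the matching qualifiers first, then delete them.
    for qualifier in [q for q in meta_row if is_merge_qualifier(q)]:
        del meta_row[qualifier]
    return meta_row
```

Running the function a second time on the same row deletes nothing, which is exactly the idempotency the issue asks for.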
[jira] [Updated] (HBASE-24335) Support deleteall with ts but without column in shell mode
[ https://issues.apache.org/jira/browse/HBASE-24335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zheng Wang updated HBASE-24335:

Release Note:
Use an empty string to represent no column specified for deleteall in shell mode.
Usage:
deleteall 'test','r1','',12345
deleteall 'test', {ROWPREFIXFILTER => 'prefix'}, '', 12345

> Support deleteall with ts but without column in shell mode
> --
>
> Key: HBASE-24335
> URL: https://issues.apache.org/jira/browse/HBASE-24335
> Project: HBase
> Issue Type: Improvement
> Components: shell
> Affects Versions: 3.0.0-alpha-1
> Reporter: Zheng Wang
> Assignee: Zheng Wang
> Priority: Major
> Fix For: 3.0.0-alpha-1, 2.3.0, 2.4.0
>
> The position after the rowkey is the column, so we can't specify only the ts now.
> My proposal is to use an empty string to represent no column specified.
> Usage:
> deleteall 'test','r1','',158876590
> deleteall 'test', \{ROWPREFIXFILTER => 'prefix'}, '', 158876590
[jira] [Commented] (HBASE-24335) Support deleteall with ts but without column in shell mode
[ https://issues.apache.org/jira/browse/HBASE-24335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17102141#comment-17102141 ] Zheng Wang commented on HBASE-24335:

Thanks very much~

> Support deleteall with ts but without column in shell mode
> --
>
> Key: HBASE-24335
> URL: https://issues.apache.org/jira/browse/HBASE-24335
> Project: HBase
> Issue Type: Improvement
> Components: shell
> Affects Versions: 3.0.0-alpha-1
> Reporter: Zheng Wang
> Assignee: Zheng Wang
> Priority: Major
> Fix For: 3.0.0-alpha-1, 2.3.0, 2.4.0
>
> The position after the rowkey is the column, so we can't specify only the ts now.
> My proposal is to use an empty string to represent no column specified.
> Usage:
> deleteall 'test','r1','',158876590
> deleteall 'test', \{ROWPREFIXFILTER => 'prefix'}, '', 158876590
[jira] [Updated] (HBASE-24335) Support deleteall with ts but without column in shell mode
[ https://issues.apache.org/jira/browse/HBASE-24335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zheng Wang updated HBASE-24335:

Description:
The position after the rowkey is the column, so we can't specify only the ts now.
My proposal is to use an empty string to represent no column specified.
Usage:
deleteall 'test','r1','',158876590
deleteall 'test', \{ROWPREFIXFILTER => 'prefix'}, '', 158876590

was:
The position after rowkey is column, so if we can't only specify ts now.
My proposal is use a empty string to represent no column specified.
useage:
deleteall 'test','r1','',158876590
deleteall 'test', \{ROWPREFIXFILTER => 'prefix'}, '', 158876590

> Support deleteall with ts but without column in shell mode
> --
>
> Key: HBASE-24335
> URL: https://issues.apache.org/jira/browse/HBASE-24335
> Project: HBase
> Issue Type: Improvement
> Components: shell
> Affects Versions: 3.0.0-alpha-1
> Reporter: Zheng Wang
> Assignee: Zheng Wang
> Priority: Major
> Fix For: 3.0.0-alpha-1, 2.3.0, 2.4.0
>
> The position after the rowkey is the column, so we can't specify only the ts now.
> My proposal is to use an empty string to represent no column specified.
> Usage:
> deleteall 'test','r1','',158876590
> deleteall 'test', \{ROWPREFIXFILTER => 'prefix'}, '', 158876590
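The positional convention this proposal relies on (row, then column, then timestamp, with '' marking "no column") can be sketched like this. The parser below is a hypothetical illustration of the convention, not the actual deleteall shell code:

```python
def parse_deleteall_args(row, column="", ts=None):
    """Model the shell's positional convention for deleteall arguments.

    An empty-string column is normalized to None, meaning
    'delete all columns of the row up to timestamp ts'.
    """
    normalized_column = column if column else None
    return row, normalized_column, ts

# e.g. the shell call deleteall 'test', 'r1', '', 158876590
# corresponds to parse_deleteall_args("r1", "", 158876590)
```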
[GitHub] [hbase] Apache-HBase commented on pull request #1681: HBASE-23938 : System table hbase:slowlog to store complete slow/large…
Apache-HBase commented on pull request #1681: URL: https://github.com/apache/hbase/pull/1681#issuecomment-625572122

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| +0 :ok: | reexec | 1m 38s | Docker mode activated. |
| -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck |
||| _ Prechecks _ |
||| _ master Compile Tests _ |
| +0 :ok: | mvndep | 0m 34s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 4m 25s | master passed |
| +1 :green_heart: | compile | 1m 29s | master passed |
| +1 :green_heart: | shadedjars | 6m 31s | branch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 1m 5s | master passed |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 14s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 4m 18s | the patch passed |
| +1 :green_heart: | compile | 1m 32s | the patch passed |
| +1 :green_heart: | javac | 1m 32s | the patch passed |
| +1 :green_heart: | shadedjars | 6m 37s | patch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 1m 12s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 1m 50s | hbase-common in the patch passed. |
| -1 :x: | unit | 212m 21s | hbase-server in the patch failed. |
| | | | 245m 53s | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | Client=19.03.8 Server=19.03.8 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1681/1/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/1681 |
| Optional Tests | javac javadoc unit shadedjars compile |
| uname | Linux 5f3a93660925 4.15.0-91-generic #92-Ubuntu SMP Fri Feb 28 11:09:48 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / fc283f7a68 |
| Default Java | 1.8.0_232 |
| unit | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1681/1/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-hbase-server.txt |
| Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1681/1/testReport/ |
| Max. process+thread count | 3609 (vs. ulimit of 12500) |
| modules | C: hbase-common hbase-server U: . |
| Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1681/1/console |
| versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
| Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
[GitHub] [hbase] Apache-HBase commented on pull request #1584: HBASE-24256 When fixOverlap hits the max region limit, it is possible…
Apache-HBase commented on pull request #1584: URL: https://github.com/apache/hbase/pull/1584#issuecomment-625571467

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| +0 :ok: | reexec | 0m 40s | Docker mode activated. |
| -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck |
||| _ Prechecks _ |
||| _ master Compile Tests _ |
| +1 :green_heart: | mvninstall | 4m 10s | master passed |
| +1 :green_heart: | compile | 1m 3s | master passed |
| +1 :green_heart: | shadedjars | 5m 25s | branch has no errors when building our shaded downstream artifacts. |
| -0 :warning: | javadoc | 0m 41s | hbase-server in master failed. |
||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 4m 4s | the patch passed |
| +1 :green_heart: | compile | 1m 3s | the patch passed |
| +1 :green_heart: | javac | 1m 3s | the patch passed |
| +1 :green_heart: | shadedjars | 5m 26s | patch has no errors when building our shaded downstream artifacts. |
| -0 :warning: | javadoc | 0m 40s | hbase-server in the patch failed. |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 123m 21s | hbase-server in the patch passed. |
| | | | 148m 41s | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | Client=19.03.8 Server=19.03.8 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1584/6/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/1584 |
| Optional Tests | javac javadoc unit shadedjars compile |
| uname | Linux 0fb36b4253ee 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / fc283f7a68 |
| Default Java | 2020-01-14 |
| javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1584/6/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-server.txt |
| javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1584/6/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-server.txt |
| Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1584/6/testReport/ |
| Max. process+thread count | 4338 (vs. ulimit of 12500) |
| modules | C: hbase-server U: hbase-server |
| Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1584/6/console |
| versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
| Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
[GitHub] [hbase] Apache9 commented on pull request #1664: HBASE-24333 Backport HBASE-24304 "Separate a hbase-asyncfs module" to…
Apache9 commented on pull request #1664: URL: https://github.com/apache/hbase/pull/1664#issuecomment-625563620

Seems we have some problems with hbase-rest on JDK11; missing jersey? I haven't changed the jdk11 stuff, any ideas? @ndimiduk @saintstack Thanks.
[GitHub] [hbase] Apache-HBase commented on pull request #1664: HBASE-24333 Backport HBASE-24304 "Separate a hbase-asyncfs module" to…
Apache-HBase commented on pull request #1664: URL: https://github.com/apache/hbase/pull/1664#issuecomment-625562370

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| +0 :ok: | reexec | 1m 15s | Docker mode activated. |
| -0 :warning: | yetus | 0m 5s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck |
||| _ Prechecks _ |
||| _ branch-2 Compile Tests _ |
| +0 :ok: | mvndep | 0m 24s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 3m 37s | branch-2 passed |
| +1 :green_heart: | compile | 2m 20s | branch-2 passed |
| +1 :green_heart: | shadedjars | 4m 50s | branch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 5m 47s | branch-2 passed |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 20s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 3m 59s | the patch passed |
| +1 :green_heart: | compile | 2m 36s | the patch passed |
| +1 :green_heart: | javac | 2m 36s | the patch passed |
| +1 :green_heart: | shadedjars | 5m 15s | patch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 5m 20s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 355m 1s | root in the patch passed. |
| | | | 395m 21s | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | Client=19.03.8 Server=19.03.8 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1664/6/artifact/yetus-jdk8-hadoop2-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/1664 |
| Optional Tests | javac javadoc unit shadedjars compile |
| uname | Linux d40cfddc9c83 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | branch-2 / 735aa8bf9f |
| Default Java | 1.8.0_232 |
| Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1664/6/testReport/ |
| Max. process+thread count | 3107 (vs. ulimit of 12500) |
| modules | C: hbase-asyncfs hbase-server hbase-mapreduce hbase-testing-util hbase-thrift hbase-endpoint hbase-rest hbase-examples hbase-assembly hbase-shaded/hbase-shaded-testing-util . U: . |
| Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1664/6/console |
| versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
| Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
[jira] [Assigned] (HBASE-24183) [flakey test] replication.TestAddToSerialReplicationPeer
[ https://issues.apache.org/jira/browse/HBASE-24183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Huaxiang Sun reassigned HBASE-24183:

Assignee: Huaxiang Sun (was: Hua Xiang)

> [flakey test] replication.TestAddToSerialReplicationPeer
>
> Key: HBASE-24183
> URL: https://issues.apache.org/jira/browse/HBASE-24183
> Project: HBase
> Issue Type: Test
> Components: Client
> Affects Versions: 3.0.0-alpha-1, 2.3.0, 2.4.0
> Reporter: Huaxiang Sun
> Assignee: Huaxiang Sun
> Priority: Major
> Fix For: 3.0.0-alpha-1, 2.3.0
>
> From both the 2.3 and branch-2 flaky test boards, it constantly runs into the following flaky failure:
>
> {code:java}
> org.apache.hadoop.hbase.replication.TestAddToSerialReplicationPeer.testAddToSerialPeer
> Failing for the past 1 build (Since #6069 )
> Took 15 sec.
> Error Message
> Sequence id go backwards from 122 to 24
> Stacktrace
> java.lang.AssertionError: Sequence id go backwards from 122 to 24
> at org.apache.hadoop.hbase.replication.TestAddToSerialReplicationPeer.testAddToSerialPeer(TestAddToSerialReplicationPeer.java:176)
> Standard Output{code}
[jira] [Commented] (HBASE-24073) [flakey test] client.TestAsyncRegionAdminApi messed up compaction state.
[ https://issues.apache.org/jira/browse/HBASE-24073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17102111#comment-17102111 ] Huaxiang Sun commented on HBASE-24073: -- Just noticed this message, got it, thanks [~ndimiduk]. > [flakey test] client.TestAsyncRegionAdminApi messed up compaction state. > > > Key: HBASE-24073 > URL: https://issues.apache.org/jira/browse/HBASE-24073 > Project: HBase > Issue Type: Test > Components: Compaction >Affects Versions: 3.0.0-alpha-1, 2.3.0, 2.4.0 > Environment: >Reporter: Huaxiang Sun >Assignee: Huaxiang Sun >Priority: Major > Fix For: 3.0.0-alpha-1, 2.3.0 > > > {code:java} > --- > Test set: org.apache.hadoop.hbase.client.TestAsyncRegionAdminApi > --- > Tests run: 20, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 268.838 s > <<< FAILURE! - in org.apache.hadoop.hbase.client.TestAsyncRegionAdminApi > org.apache.hadoop.hbase.client.TestAsyncRegionAdminApi.testCompact[0] Time > elapsed: 50.471 s <<< FAILURE! > java.lang.AssertionError: expected: but was: > at > org.apache.hadoop.hbase.client.TestAsyncRegionAdminApi.compactionTest(TestAsyncRegionAdminApi.java:415) > at > org.apache.hadoop.hbase.client.TestAsyncRegionAdminApi.testCompact(TestAsyncRegionAdminApi.java:364) > Another case found during local test: > --- > Test set: org.apache.hadoop.hbase.client.TestAsyncRegionAdminApi > --- > Tests run: 20, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 224.399 s > <<< FAILURE! - in org.apache.hadoop.hbase.client.TestAsyncRegionAdminApi > org.apache.hadoop.hbase.client.TestAsyncRegionAdminApi.testCompact[1] Time > elapsed: 30.861 s <<< FAILURE! 
> java.lang.AssertionError > at org.junit.Assert.fail(Assert.java:87) > at org.junit.Assert.assertTrue(Assert.java:42) > at org.junit.Assert.assertTrue(Assert.java:53) > at > org.apache.hadoop.hbase.client.TestAsyncRegionAdminApi.compactionTest(TestAsyncRegionAdminApi.java:444) > at > org.apache.hadoop.hbase.client.TestAsyncRegionAdminApi.testCompact(TestAsyncRegionAdminApi.java:363) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) > at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) > at > org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) > at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) > at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) > at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) > at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) > at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) > at 
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) > at org.junit.runners.ParentRunner.run(ParentRunner.java:413) > at org.junit.runners.Suite.runChild(Suite.java:128) > at org.junit.runners.Suite.runChild(Suite.java:27) > at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) > at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) > at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) > at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) > at
[GitHub] [hbase] Apache-HBase commented on pull request #1681: HBASE-23938 : System table hbase:slowlog to store complete slow/large…
Apache-HBase commented on pull request #1681: URL: https://github.com/apache/hbase/pull/1681#issuecomment-625548314

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| +0 :ok: | reexec | 0m 32s | Docker mode activated. |
| -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck |
||| _ Prechecks _ |
||| _ master Compile Tests _ |
| +0 :ok: | mvndep | 0m 34s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 4m 32s | master passed |
| +1 :green_heart: | compile | 1m 35s | master passed |
| +1 :green_heart: | shadedjars | 5m 48s | branch has no errors when building our shaded downstream artifacts. |
| -0 :warning: | javadoc | 0m 18s | hbase-common in master failed. |
| -0 :warning: | javadoc | 0m 45s | hbase-server in master failed. |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 15s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 4m 13s | the patch passed |
| +1 :green_heart: | compile | 1m 30s | the patch passed |
| +1 :green_heart: | javac | 1m 30s | the patch passed |
| +1 :green_heart: | shadedjars | 5m 33s | patch has no errors when building our shaded downstream artifacts. |
| -0 :warning: | javadoc | 0m 17s | hbase-common in the patch failed. |
| -0 :warning: | javadoc | 0m 39s | hbase-server in the patch failed. |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 1m 33s | hbase-common in the patch passed. |
| -1 :x: | unit | 125m 56s | hbase-server in the patch failed. |
| | | | 156m 15s | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | Client=19.03.8 Server=19.03.8 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1681/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/1681 |
| Optional Tests | javac javadoc unit shadedjars compile |
| uname | Linux d136f87dee9a 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / fc283f7a68 |
| Default Java | 2020-01-14 |
| javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1681/1/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-common.txt |
| javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1681/1/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-server.txt |
| javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1681/1/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-common.txt |
| javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1681/1/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-server.txt |
| unit | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1681/1/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-server.txt |
| Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1681/1/testReport/ |
| Max. process+thread count | 4155 (vs. ulimit of 12500) |
| modules | C: hbase-common hbase-server U: . |
| Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1681/1/console |
| versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
| Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
[jira] [Commented] (HBASE-24332) TestJMXListener.setupBeforeClass can fail due to not getting a random port.
[ https://issues.apache.org/jira/browse/HBASE-24332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17102101#comment-17102101 ] Michael Stack commented on HBASE-24332: --- [~markrmiller] You could see if reuse will help. The BoundSocketMaker might help? > TestJMXListener.setupBeforeClass can fail due to not getting a random port. > --- > > Key: HBASE-24332 > URL: https://issues.apache.org/jira/browse/HBASE-24332 > Project: HBase > Issue Type: Test > Components: test >Reporter: Mark Robert Miller >Priority: Minor > > [ERROR] Errors: > [ERROR] TestJMXListener.setupBeforeClass:61 » IO Shutting down -- This message was sent by Atlassian Jira (v8.3.4#803005)
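The flakiness discussed above is the usual free-port race. A minimal, self-contained sketch (illustrative only; this is not HBase's BoundSocketMaker) shows why a port obtained by bind-then-close can already be taken by the time the real server binds it:

```java
import java.io.IOException;
import java.net.ServerSocket;

public class PortRaceDemo {
    // Classic "find a free port" helper: bind an ephemeral port, note its
    // number, then release it. Racy, because the port is free again before
    // the real server binds it, and any other process may grab it first.
    static int randomFreePort() throws IOException {
        try (ServerSocket probe = new ServerSocket(0)) {
            return probe.getLocalPort();
        } // probe socket closed here -- the race window opens
    }

    public static void main(String[] args) throws IOException {
        int port = randomFreePort();
        // Between randomFreePort() and this bind, another process can take
        // the port; the bind then fails with java.net.BindException.
        try (ServerSocket server = new ServerSocket(port)) {
            System.out.println("bound " + server.getLocalPort());
        }
    }
}
```

When the port number does not have to be known in advance, binding the real listener directly to port 0 closes the window entirely.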
[GitHub] [hbase] Apache-HBase commented on pull request #1584: HBASE-24256 When fixOverlap hits the max region limit, it is possible…
Apache-HBase commented on pull request #1584: URL: https://github.com/apache/hbase/pull/1584#issuecomment-625542228 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 29s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 5m 0s | master passed | | +1 :green_heart: | checkstyle | 1m 34s | master passed | | +1 :green_heart: | spotbugs | 2m 45s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 29s | the patch passed | | +1 :green_heart: | checkstyle | 1m 19s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | hadoopcheck | 14m 17s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1. | | +1 :green_heart: | spotbugs | 2m 49s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 15s | The patch does not generate ASF License warnings. | | | | 41m 31s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.8 Server=19.03.8 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1584/6/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/1584 | | Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti checkstyle | | uname | Linux ecc1c536fb25 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / fc283f7a68 | | Max. process+thread count | 84 (vs. 
ulimit of 12500) | | modules | C: hbase-server U: hbase-server | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1584/6/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) spotbugs=3.1.12 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #1664: HBASE-24333 Backport HBASE-24304 "Separate a hbase-asyncfs module" to…
Apache-HBase commented on pull request #1664: URL: https://github.com/apache/hbase/pull/1664#issuecomment-625534247 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 56s | Docker mode activated. | | -0 :warning: | yetus | 0m 6s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ branch-2 Compile Tests _ | | +0 :ok: | mvndep | 0m 21s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 4m 27s | branch-2 passed | | +1 :green_heart: | compile | 2m 59s | branch-2 passed | | +1 :green_heart: | shadedjars | 5m 47s | branch has no errors when building our shaded downstream artifacts. | | -0 :warning: | javadoc | 0m 41s | hbase-server in branch-2 failed. | | -0 :warning: | javadoc | 0m 19s | hbase-mapreduce in branch-2 failed. | | -0 :warning: | javadoc | 0m 58s | hbase-thrift in branch-2 failed. | | -0 :warning: | javadoc | 0m 22s | hbase-rest in branch-2 failed. | | -0 :warning: | javadoc | 0m 20s | hbase-examples in branch-2 failed. | | -0 :warning: | javadoc | 0m 20s | root in branch-2 failed. | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 19s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 4m 39s | the patch passed | | +1 :green_heart: | compile | 3m 8s | the patch passed | | +1 :green_heart: | javac | 3m 8s | the patch passed | | +1 :green_heart: | shadedjars | 6m 26s | patch has no errors when building our shaded downstream artifacts. | | -0 :warning: | javadoc | 0m 16s | hbase-asyncfs in the patch failed. | | -0 :warning: | javadoc | 0m 44s | hbase-server in the patch failed. | | -0 :warning: | javadoc | 0m 20s | hbase-mapreduce in the patch failed. | | -0 :warning: | javadoc | 1m 0s | hbase-thrift in the patch failed. | | -0 :warning: | javadoc | 0m 25s | hbase-rest in the patch failed. 
| | -0 :warning: | javadoc | 0m 23s | hbase-examples in the patch failed. | | -0 :warning: | javadoc | 0m 18s | root in the patch failed. | ||| _ Other Tests _ | | -1 :x: | unit | 256m 12s | root in the patch failed. | | | | 298m 16s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.8 Server=19.03.8 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1664/6/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/1664 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 9b3600c4f56d 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | branch-2 / 735aa8bf9f | | Default Java | 2020-01-14 | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1664/6/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-server.txt | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1664/6/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-mapreduce.txt | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1664/6/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-thrift.txt | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1664/6/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-rest.txt | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1664/6/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-examples.txt | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1664/6/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-root.txt | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1664/6/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-asyncfs.txt | | javadoc | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1664/6/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-server.txt | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1664/6/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-mapreduce.txt | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1664/6/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-thrift.txt | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1664/6/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-rest.txt | | javadoc |
[GitHub] [hbase] Apache-HBase commented on pull request #1584: HBASE-24256 When fixOverlap hits the max region limit, it is possible…
Apache-HBase commented on pull request #1584: URL: https://github.com/apache/hbase/pull/1584#issuecomment-625528162 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 2m 2s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 15s | master passed | | +1 :green_heart: | compile | 1m 2s | master passed | | +1 :green_heart: | shadedjars | 6m 2s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 39s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 58s | the patch passed | | +1 :green_heart: | compile | 1m 3s | the patch passed | | +1 :green_heart: | javac | 1m 3s | the patch passed | | +1 :green_heart: | shadedjars | 6m 49s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 36s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 231m 45s | hbase-server in the patch passed. | | | | 260m 52s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.8 Server=19.03.8 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1584/5/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/1584 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 50faad3dfdc1 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 2cafe81e9c | | Default Java | 1.8.0_232 | | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1584/5/testReport/ | | Max. process+thread count | 2972 (vs. 
ulimit of 12500) | | modules | C: hbase-server U: hbase-server | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1584/5/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #1682: Backport: HBASE-24273 HBCK's "Orphan Regions on FileSystem" reports regions wit…
Apache-HBase commented on pull request #1682: URL: https://github.com/apache/hbase/pull/1682#issuecomment-625517665 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 0s | Docker mode activated. | | -1 :x: | docker | 1m 21s | Docker failed to build yetus/hbase:d9b4982ad4. | | Subsystem | Report/Notes | |--:|:-| | GITHUB PR | https://github.com/apache/hbase/pull/1682 | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1682/1/console | | versions | git=2.17.1 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated.
[GitHub] [hbase] huaxiangsun opened a new pull request #1682: HBASE-24273 HBCK's "Orphan Regions on FileSystem" reports regions wit…
huaxiangsun opened a new pull request #1682: URL: https://github.com/apache/hbase/pull/1682 …h referenced HFiles (#1613) Signed-off-by: stack
[GitHub] [hbase-site] bharathv opened a new pull request #4: HBASE-24261: Initial version of ASF infra integration configuration
bharathv opened a new pull request #4: URL: https://github.com/apache/hbase-site/pull/4 This is an initial version of the yaml config for ASF infra integration. We might have some hiccups in the beginning but we can iteratively improve until the old (desired) setup is back in place.
[GitHub] [hbase] Apache-HBase commented on pull request #1681: HBASE-23938 : System table hbase:slowlog to store complete slow/large…
Apache-HBase commented on pull request #1681: URL: https://github.com/apache/hbase/pull/1681#issuecomment-625509027 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 2m 9s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 33s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 3m 42s | master passed | | +1 :green_heart: | checkstyle | 1m 34s | master passed | | +1 :green_heart: | spotbugs | 2m 40s | master passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 14s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 28s | the patch passed | | -0 :warning: | checkstyle | 1m 7s | hbase-server: The patch generated 4 new + 148 unchanged - 0 fixed = 152 total (was 148) | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | hadoopcheck | 11m 27s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1. | | +1 :green_heart: | spotbugs | 3m 2s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 26s | The patch does not generate ASF License warnings. 
| | | | 38m 32s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.8 Server=19.03.8 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1681/1/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/1681 | | Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti checkstyle | | uname | Linux 69a6c204a436 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / fc283f7a68 | | checkstyle | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1681/1/artifact/yetus-general-check/output/diff-checkstyle-hbase-server.txt | | Max. process+thread count | 94 (vs. ulimit of 12500) | | modules | C: hbase-common hbase-server U: . | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1681/1/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) spotbugs=3.1.12 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Resolved] (HBASE-24250) CatalogJanitor resubmits GCMultipleMergedRegionsProcedure for the same region
[ https://issues.apache.org/jira/browse/HBASE-24250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Huaxiang Sun resolved HBASE-24250. -- Fix Version/s: 2.4.0 2.3.0 3.0.0-alpha-1 Resolution: Fixed > CatalogJanitor resubmits GCMultipleMergedRegionsProcedure for the same region > - > > Key: HBASE-24250 > URL: https://issues.apache.org/jira/browse/HBASE-24250 > Project: HBase > Issue Type: Bug > Components: master >Affects Versions: 2.2.4 > Environment: hdfs 3.1.3 with erasure coding > hbase 2.2.4 >Reporter: Andrey Elenskiy >Assignee: niuyulin >Priority: Major > Fix For: 3.0.0-alpha-1, 2.3.0, 2.4.0 > > > If a lot of regions were merged (due to change of region sizes, for example), > there can be a long backlog of procedures to clean up the merged regions. If > going through this backlog is slower than the CatalogJanitor's scan interval, > it will end up resubmitting GCMultipleMergedRegionsProcedure for the same > regions over and over again. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-24250) CatalogJanitor resubmits GCMultipleMergedRegionsProcedure for the same region
[ https://issues.apache.org/jira/browse/HBASE-24250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17102039#comment-17102039 ] Huaxiang Sun commented on HBASE-24250: -- Hi [~niuyulin], I pushed your patch to master, branch-2 and branch-2.3, resolving it, thanks for the patch. > CatalogJanitor resubmits GCMultipleMergedRegionsProcedure for the same region > - > > Key: HBASE-24250 > URL: https://issues.apache.org/jira/browse/HBASE-24250 > Project: HBase > Issue Type: Bug > Components: master >Affects Versions: 2.2.4 > Environment: hdfs 3.1.3 with erasure coding > hbase 2.2.4 >Reporter: Andrey Elenskiy >Assignee: niuyulin >Priority: Major > > If a lot of regions were merged (due to change of region sizes, for example), > there can be a long backlog of procedures to clean up the merged regions. If > going through this backlog is slower than the CatalogJanitor's scan interval, > it will end up resubmitting GCMultipleMergedRegionsProcedure for the same > regions over and over again. -- This message was sent by Atlassian Jira (v8.3.4#803005)
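The resubmission loop described in HBASE-24250 can be avoided by remembering which regions already have a cleanup procedure in flight. The following is only an illustrative sketch of that idea (the class and method names are hypothetical, not the actual HBase patch):

```java
import java.util.HashSet;
import java.util.Set;

public class GcProcedureDeduper {
    // Regions whose cleanup procedure is already queued. Checked before each
    // submission, so a backlog that outlasts the scan interval is not
    // re-enqueued on every CatalogJanitor pass.
    private final Set<String> inFlight = new HashSet<>();

    /** Returns true if a cleanup procedure should be submitted for this region. */
    public synchronized boolean trySubmit(String regionName) {
        return inFlight.add(regionName);
    }

    /** Called when the procedure completes and the region is cleaned up. */
    public synchronized void finished(String regionName) {
        inFlight.remove(regionName);
    }
}
```

With this guard, a second scan that finds the same merged region simply gets `false` from `trySubmit` and moves on.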
[GitHub] [hbase] huaxiangsun commented on pull request #1584: HBASE-24256 When fixOverlap hits the max region limit, it is possible…
huaxiangsun commented on pull request #1584: URL: https://github.com/apache/hbase/pull/1584#issuecomment-625501342 added comments and a new unit test.
[GitHub] [hbase] huaxiangsun commented on a change in pull request #1584: HBASE-24256 When fixOverlap hits the max region limit, it is possible…
huaxiangsun commented on a change in pull request #1584:
URL: https://github.com/apache/hbase/pull/1584#discussion_r421798046

## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/MetaFixer.java
## @@ -242,17 +243,35 @@ void fixOverlaps(CatalogJanitor.Report report) throws IOException {
     }
     List<SortedSet<RegionInfo>> merges = new ArrayList<>();
     SortedSet<RegionInfo> currentMergeSet = new TreeSet<>();
+    HashSet<RegionInfo> regionsInMergeSet = new HashSet<>();
     RegionInfo regionInfoWithlargestEndKey = null;
     for (Pair<RegionInfo, RegionInfo> pair : overlaps) {
       if (regionInfoWithlargestEndKey != null) {
         if (!isOverlap(regionInfoWithlargestEndKey, pair) || currentMergeSet.size() >= maxMergeCount) {
-          merges.add(currentMergeSet);
-          currentMergeSet = new TreeSet<>();
+          if (currentMergeSet.size() <= 1) {

Review comment: added comments.
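The grouping logic in the diff above — accumulate sorted overlapping regions into a merge set, cut the set when it reaches the max-merge limit, and discard sets with at most one member (a single region cannot be merged with anything) — can be sketched in isolation. This is a simplified illustration with plain strings standing in for RegionInfo and the overlap check omitted; it is not the actual MetaFixer code:

```java
import java.util.ArrayList;
import java.util.List;

public class MergeBatcher {
    // Groups a sorted run of overlapping items into merge batches of at most
    // maxMergeCount members; batches of size <= 1 are dropped, mirroring the
    // currentMergeSet.size() <= 1 guard added in the patch.
    static List<List<String>> batch(List<String> overlaps, int maxMergeCount) {
        List<List<String>> merges = new ArrayList<>();
        List<String> current = new ArrayList<>();
        for (String region : overlaps) {
            if (current.size() >= maxMergeCount) {
                // Cut the batch, but only keep it if it is actually mergeable.
                if (current.size() > 1) {
                    merges.add(current);
                }
                current = new ArrayList<>();
            }
            current.add(region);
        }
        if (current.size() > 1) {
            merges.add(current);
        }
        return merges;
    }
}
```

For five regions and a limit of 2, this yields two merge batches and silently drops the trailing singleton, which is the behavior the review thread is discussing.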
[jira] [Resolved] (HBASE-24338) [Flakey Tests] NPE in TestRaceBetweenSCPAndDTP
[ https://issues.apache.org/jira/browse/HBASE-24338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Stack resolved HBASE-24338. --- Fix Version/s: 2.3.0 3.0.0-alpha-1 Hadoop Flags: Reviewed Resolution: Fixed Pushed to 2.3+. Thanks for reviews [~binlijin] and [~zhangduo] > [Flakey Tests] NPE in TestRaceBetweenSCPAndDTP > -- > > Key: HBASE-24338 > URL: https://issues.apache.org/jira/browse/HBASE-24338 > Project: HBase > Issue Type: Bug > Components: flakies, test >Reporter: Michael Stack >Assignee: Michael Stack >Priority: Major > Fix For: 3.0.0-alpha-1, 2.3.0 > > > Seen in local runs -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-23938) Replicate slow/large RPC calls to HDFS
[ https://issues.apache.org/jira/browse/HBASE-23938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17102022#comment-17102022 ] Viraj Jasani commented on HBASE-23938: -- Please review: [https://github.com/apache/hbase/pull/1681] > Replicate slow/large RPC calls to HDFS > -- > > Key: HBASE-23938 > URL: https://issues.apache.org/jira/browse/HBASE-23938 > Project: HBase > Issue Type: Sub-task >Affects Versions: 3.0.0-alpha-1, 2.3.0, 1.7.0 >Reporter: Viraj Jasani >Assignee: Viraj Jasani >Priority: Major > Fix For: 3.0.0-alpha-1, 2.3.0 > > Attachments: Screen Shot 2020-05-07 at 12.01.26 AM.png > > > We should provide capability to replicate complete slow and large RPC logs to > HDFS or create new system table in addition to Ring Buffer. This way we don't > lose any of slow logs and operator can retrieve all the slow/large logs. > Replicating logs to HDFS / creating new system table should be configurable. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-23938) Replicate slow/large RPC calls to HDFS
[ https://issues.apache.org/jira/browse/HBASE-23938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Viraj Jasani updated HBASE-23938: - Fix Version/s: 2.3.0 3.0.0-alpha-1 Status: Patch Available (was: In Progress) > Replicate slow/large RPC calls to HDFS > -- > > Key: HBASE-23938 > URL: https://issues.apache.org/jira/browse/HBASE-23938 > Project: HBase > Issue Type: Sub-task >Affects Versions: 3.0.0-alpha-1, 2.3.0, 1.7.0 >Reporter: Viraj Jasani >Assignee: Viraj Jasani >Priority: Major > Fix For: 3.0.0-alpha-1, 2.3.0 > > Attachments: Screen Shot 2020-05-07 at 12.01.26 AM.png > > > We should provide capability to replicate complete slow and large RPC logs to > HDFS or create new system table in addition to Ring Buffer. This way we don't > lose any of slow logs and operator can retrieve all the slow/large logs. > Replicating logs to HDFS / creating new system table should be configurable. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Work started] (HBASE-23938) Replicate slow/large RPC calls to HDFS
[ https://issues.apache.org/jira/browse/HBASE-23938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HBASE-23938 started by Viraj Jasani. > Replicate slow/large RPC calls to HDFS > -- > > Key: HBASE-23938 > URL: https://issues.apache.org/jira/browse/HBASE-23938 > Project: HBase > Issue Type: Sub-task >Affects Versions: 3.0.0-alpha-1, 2.3.0, 1.7.0 >Reporter: Viraj Jasani >Assignee: Viraj Jasani >Priority: Major > Attachments: Screen Shot 2020-05-07 at 12.01.26 AM.png > > > We should provide capability to replicate complete slow and large RPC logs to > HDFS or create new system table in addition to Ring Buffer. This way we don't > lose any of slow logs and operator can retrieve all the slow/large logs. > Replicating logs to HDFS / creating new system table should be configurable. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-24342) [Flakey Tests] Disable TestClusterPortAssignment.testClusterPortAssignment as it can't pass 100% of the time
[ https://issues.apache.org/jira/browse/HBASE-24342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17102021#comment-17102021 ] Mark Robert Miller commented on HBASE-24342: Nice! Also in my list. > [Flakey Tests] Disable TestClusterPortAssignment.testClusterPortAssignment as > it can't pass 100% of the time > > > Key: HBASE-24342 > URL: https://issues.apache.org/jira/browse/HBASE-24342 > Project: HBase > Issue Type: Bug > Components: flakies, test >Reporter: Michael Stack >Assignee: Michael Stack >Priority: Major > Fix For: 3.0.0-alpha-1, 2.3.0 > > > This is a BindException special. We get randomFreePort and then put up the > procesess. > {code} > 2020-05-07 00:30:15,844 INFO [Time-limited test] http.HttpServer(1080): > HttpServer.start() threw a non Bind IOException > java.net.BindException: Port in use: 0.0.0.0:59568 > at > org.apache.hadoop.hbase.http.HttpServer.openListeners(HttpServer.java:1146) > at org.apache.hadoop.hbase.http.HttpServer.start(HttpServer.java:1077) > at org.apache.hadoop.hbase.http.InfoServer.start(InfoServer.java:148) > at > org.apache.hadoop.hbase.regionserver.HRegionServer.putUpWebUI(HRegionServer.java:2133) > at > org.apache.hadoop.hbase.regionserver.HRegionServer.(HRegionServer.java:670) > at org.apache.hadoop.hbase.master.HMaster.(HMaster.java:511) > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) > at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:423) > at > org.apache.hadoop.hbase.util.JVMClusterUtil.createMasterThread(JVMClusterUtil.java:132) > at > org.apache.hadoop.hbase.LocalHBaseCluster.addMaster(LocalHBaseCluster.java:239) > at > org.apache.hadoop.hbase.LocalHBaseCluster.(LocalHBaseCluster.java:181) > at > 
org.apache.hadoop.hbase.MiniHBaseCluster.init(MiniHBaseCluster.java:245) > at > org.apache.hadoop.hbase.MiniHBaseCluster.(MiniHBaseCluster.java:115) > at > org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster(HBaseTestingUtility.java:1178) > at > org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:1142) > at > org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:1106) > at > org.apache.hadoop.hbase.TestClusterPortAssignment.testClusterPortAssignment(TestClusterPortAssignment.java:57) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) > at > org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) > at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) > at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) > at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) > at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) > at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) > at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) > at > 
org.apache.hadoop.hbase.SystemExitRule$1.evaluate(SystemExitRule.java:38) > at > org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:288) > at > org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:282) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at java.lang.Thread.run(Thread.java:748) > Caused by: java.net.BindException: Address already in use > at sun.nio.ch.Net.bind0(Native Method) > at
[GitHub] [hbase] virajjasani opened a new pull request #1681: HBASE-23938 : System table hbase:slowlog to store complete slow/large…
virajjasani opened a new pull request #1681: URL: https://github.com/apache/hbase/pull/1681 … RPC logs
[GitHub] [hbase] Apache-HBase commented on pull request #1584: HBASE-24256 When fixOverlap hits the max region limit, it is possible…
Apache-HBase commented on pull request #1584: URL: https://github.com/apache/hbase/pull/1584#issuecomment-625484210 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 31s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 18s | master passed | | +1 :green_heart: | compile | 1m 5s | master passed | | +1 :green_heart: | shadedjars | 5m 23s | branch has no errors when building our shaded downstream artifacts. | | -0 :warning: | javadoc | 0m 42s | hbase-server in master failed. | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 59s | the patch passed | | +1 :green_heart: | compile | 1m 3s | the patch passed | | +1 :green_heart: | javac | 1m 3s | the patch passed | | +1 :green_heart: | shadedjars | 5m 24s | patch has no errors when building our shaded downstream artifacts. | | -0 :warning: | javadoc | 0m 40s | hbase-server in the patch failed. | ||| _ Other Tests _ | | +1 :green_heart: | unit | 126m 31s | hbase-server in the patch passed. 
| | | | 151m 43s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.8 Server=19.03.8 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1584/5/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/1584 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux d0ac89c5ed3a 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 2cafe81e9c | | Default Java | 2020-01-14 | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1584/5/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-server.txt | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1584/5/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-server.txt | | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1584/5/testReport/ | | Max. process+thread count | 4379 (vs. ulimit of 12500) | | modules | C: hbase-server U: hbase-server | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1584/5/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] saintstack commented on a change in pull request #1669: HBASE-24338 [Flakey Tests] NPE in TestRaceBetweenSCPAndDTP
saintstack commented on a change in pull request #1669: URL: https://github.com/apache/hbase/pull/1669#discussion_r421777321 ## File path: hbase-server/src/test/java/org/apache/hadoop/hbase/master/assignment/TestRaceBetweenSCPAndDTP.java ## @@ -21,11 +21,7 @@ import java.util.Optional; import java.util.concurrent.CountDownLatch; import org.apache.hadoop.conf.Configuration; -import org.apache.hadoop.hbase.HBaseClassTestRule; -import org.apache.hadoop.hbase.HBaseTestingUtility; -import org.apache.hadoop.hbase.HConstants; -import org.apache.hadoop.hbase.ServerName; -import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.*; Review comment: Dumb on my part. Let me fix.
[GitHub] [hbase] Apache-HBase commented on pull request #1678: Backport: HBASE-24328 skip duplicate GCMultipleMergedRegionsProcedure while pre…
Apache-HBase commented on pull request #1678: URL: https://github.com/apache/hbase/pull/1678#issuecomment-625480436 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 43s | Docker mode activated. | | -0 :warning: | yetus | 0m 5s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ branch-2.3 Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 40s | branch-2.3 passed | | +1 :green_heart: | compile | 0m 56s | branch-2.3 passed | | +1 :green_heart: | shadedjars | 4m 30s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 37s | branch-2.3 passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 13s | the patch passed | | +1 :green_heart: | compile | 0m 56s | the patch passed | | +1 :green_heart: | javac | 0m 56s | the patch passed | | +1 :green_heart: | shadedjars | 4m 26s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 35s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 132m 24s | hbase-server in the patch passed. | | | | 153m 57s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.8 Server=19.03.8 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1678/1/artifact/yetus-jdk8-hadoop2-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/1678 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 926f64576490 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | branch-2.3 / 9dd8c9607c | | Default Java | 1.8.0_232 | | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1678/1/testReport/ | | Max. 
process+thread count | 4230 (vs. ulimit of 12500) | | modules | C: hbase-server U: hbase-server | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1678/1/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] ndimiduk opened a new pull request #1680: Backport "HBASE-24295 [Chaos Monkey] abstract logging through the class hierarchy ; ADDENDUM" to branch-2.3
ndimiduk opened a new pull request #1680: URL: https://github.com/apache/hbase/pull/1680 Signed-off-by: Jan Hentschel
[jira] [Resolved] (HBASE-24295) [Chaos Monkey] abstract logging through the class hierarchy
[ https://issues.apache.org/jira/browse/HBASE-24295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nick Dimiduk resolved HBASE-24295. -- Resolution: Fixed > [Chaos Monkey] abstract logging through the class hierarchy > --- > > Key: HBASE-24295 > URL: https://issues.apache.org/jira/browse/HBASE-24295 > Project: HBase > Issue Type: Task > Components: integration tests >Affects Versions: 2.3.0 >Reporter: Nick Dimiduk >Assignee: Nick Dimiduk >Priority: Minor > Fix For: 3.0.0-alpha-1, 2.3.0 > > > Running chaos monkey and watching the logs, it's very difficult to tell what > actions are actually running. There's lots of shared methods through the > class hierarchy that extends from {{abstract class Action}}, and each class > comes with its own {{Logger}}. As a result, the logs have useless stuff like > {noformat} > INFO actions.Action: Started regionserver... > {noformat} > Add {{protected abstract Logger getLogger()}} to the class's internal > interface, and have the concrete implementations provide their logger. -- This message was sent by Atlassian Jira (v8.3.4#803005)
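The `getLogger()` refactor described in HBASE-24295 above could look roughly like the sketch below. The class names here are hypothetical, and `java.util.logging` stands in for the project's actual logging API; the point is only the pattern: shared helpers in the abstract base call an abstract `getLogger()`, so log lines carry the concrete action's name instead of a generic `actions.Action`.

```java
import java.util.logging.Logger;

public class ActionLoggingSketch {

    // Hypothetical stand-in for the chaos-monkey Action hierarchy.
    public abstract static class Action {
        // Each concrete action supplies its own logger, so shared helper
        // methods log under the subclass's name rather than "Action".
        protected abstract Logger getLogger();

        public String loggerName() {
            return getLogger().getName();
        }

        public void startRegionServer() {
            // Logged under the concrete subclass's logger name.
            getLogger().info("Started regionserver...");
        }
    }

    public static class RestartRandomRsAction extends Action {
        private static final Logger LOG =
            Logger.getLogger(RestartRandomRsAction.class.getName());

        @Override
        protected Logger getLogger() {
            return LOG;
        }
    }
}
```

With this shape, `new RestartRandomRsAction().startRegionServer()` attributes the "Started regionserver..." line to `RestartRandomRsAction` rather than the shared base class.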
[jira] [Resolved] (HBASE-24342) [Flakey Tests] Disable TestClusterPortAssignment.testClusterPortAssignment as it can't pass 100% of the time
[ https://issues.apache.org/jira/browse/HBASE-24342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Stack resolved HBASE-24342. --- Fix Version/s: 2.3.0 3.0.0-alpha-1 Resolution: Fixed Pushed to branch-2.3+. branch-2 failed last night because of this test failure https://builds.apache.org/view/H-L/view/HBase/job/HBase%20Nightly/job/branch-2/ #2647 > [Flakey Tests] Disable TestClusterPortAssignment.testClusterPortAssignment as > it can't pass 100% of the time > > > Key: HBASE-24342 > URL: https://issues.apache.org/jira/browse/HBASE-24342 > Project: HBase > Issue Type: Bug > Components: flakies, test >Reporter: Michael Stack >Assignee: Michael Stack >Priority: Major > Fix For: 3.0.0-alpha-1, 2.3.0 > > > This is a BindException special. We get randomFreePort and then put up the > processes. > {code} > 2020-05-07 00:30:15,844 INFO [Time-limited test] http.HttpServer(1080): > HttpServer.start() threw a non Bind IOException > java.net.BindException: Port in use: 0.0.0.0:59568 > at > org.apache.hadoop.hbase.http.HttpServer.openListeners(HttpServer.java:1146) > at org.apache.hadoop.hbase.http.HttpServer.start(HttpServer.java:1077) > at org.apache.hadoop.hbase.http.InfoServer.start(InfoServer.java:148) > at > org.apache.hadoop.hbase.regionserver.HRegionServer.putUpWebUI(HRegionServer.java:2133) > at > org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:670) > at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:511) > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) > at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:423) > at > org.apache.hadoop.hbase.util.JVMClusterUtil.createMasterThread(JVMClusterUtil.java:132) > at > 
org.apache.hadoop.hbase.LocalHBaseCluster.addMaster(LocalHBaseCluster.java:239) > at > org.apache.hadoop.hbase.LocalHBaseCluster.(LocalHBaseCluster.java:181) > at > org.apache.hadoop.hbase.MiniHBaseCluster.init(MiniHBaseCluster.java:245) > at > org.apache.hadoop.hbase.MiniHBaseCluster.(MiniHBaseCluster.java:115) > at > org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster(HBaseTestingUtility.java:1178) > at > org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:1142) > at > org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:1106) > at > org.apache.hadoop.hbase.TestClusterPortAssignment.testClusterPortAssignment(TestClusterPortAssignment.java:57) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) > at > org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) > at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) > at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) > at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) > at 
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) > at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) > at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) > at > org.apache.hadoop.hbase.SystemExitRule$1.evaluate(SystemExitRule.java:38) > at > org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:288) > at > org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:282) > at
[GitHub] [hbase] ndimiduk opened a new pull request #1679: Backport "HBASE-24295 [Chaos Monkey] abstract logging through the class hierarchy ; ADDENDUM" to branch-2
ndimiduk opened a new pull request #1679: URL: https://github.com/apache/hbase/pull/1679 Signed-off-by: Jan Hentschel
[GitHub] [hbase] Apache-HBase commented on pull request #1678: Backport: HBASE-24328 skip duplicate GCMultipleMergedRegionsProcedure while pre…
Apache-HBase commented on pull request #1678: URL: https://github.com/apache/hbase/pull/1678#issuecomment-625476075 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 37s | Docker mode activated. | | -0 :warning: | yetus | 0m 6s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ branch-2.3 Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 9s | branch-2.3 passed | | +1 :green_heart: | compile | 1m 1s | branch-2.3 passed | | +1 :green_heart: | shadedjars | 5m 11s | branch has no errors when building our shaded downstream artifacts. | | -0 :warning: | javadoc | 0m 42s | hbase-server in branch-2.3 failed. | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 50s | the patch passed | | +1 :green_heart: | compile | 1m 2s | the patch passed | | +1 :green_heart: | javac | 1m 2s | the patch passed | | +1 :green_heart: | shadedjars | 5m 12s | patch has no errors when building our shaded downstream artifacts. | | -0 :warning: | javadoc | 0m 39s | hbase-server in the patch failed. | ||| _ Other Tests _ | | +1 :green_heart: | unit | 120m 2s | hbase-server in the patch passed. 
| | | | 144m 39s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.8 Server=19.03.8 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1678/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/1678 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 3058869576f9 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | branch-2.3 / 9dd8c9607c | | Default Java | 2020-01-14 | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1678/1/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-server.txt | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1678/1/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-server.txt | | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1678/1/testReport/ | | Max. process+thread count | 3766 (vs. ulimit of 12500) | | modules | C: hbase-server U: hbase-server | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1678/1/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] ndimiduk commented on pull request #1676: HBASE-24295 [Chaos Monkey] abstract logging through the class hierarchy ; ADDENDUM
ndimiduk commented on pull request #1676: URL: https://github.com/apache/hbase/pull/1676#issuecomment-625476182 Thanks @HorizonNet
[jira] [Updated] (HBASE-24342) [Flakey Tests] Disable TestClusterPortAssignment.testClusterPortAssignment as it can't pass 100% of the time
[ https://issues.apache.org/jira/browse/HBASE-24342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Stack updated HBASE-24342: -- Summary: [Flakey Tests] Disable TestClusterPortAssignment.testClusterPortAssignment as it can't pass 100% of the time (was: [Flakey Tests] TestClusterPortAssignment.testClusterPortAssignment) > [Flakey Tests] Disable TestClusterPortAssignment.testClusterPortAssignment as > it can't pass 100% of the time > > > Key: HBASE-24342 > URL: https://issues.apache.org/jira/browse/HBASE-24342 > Project: HBase > Issue Type: Bug > Components: flakies, test >Reporter: Michael Stack >Assignee: Michael Stack >Priority: Major > > This is a BindException special. We get randomFreePort and then put up the > processes. > {code} > 2020-05-07 00:30:15,844 INFO [Time-limited test] http.HttpServer(1080): > HttpServer.start() threw a non Bind IOException > java.net.BindException: Port in use: 0.0.0.0:59568 > at > org.apache.hadoop.hbase.http.HttpServer.openListeners(HttpServer.java:1146) > at org.apache.hadoop.hbase.http.HttpServer.start(HttpServer.java:1077) > at org.apache.hadoop.hbase.http.InfoServer.start(InfoServer.java:148) > at > org.apache.hadoop.hbase.regionserver.HRegionServer.putUpWebUI(HRegionServer.java:2133) > at > org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:670) > at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:511) > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) > at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:423) > at > org.apache.hadoop.hbase.util.JVMClusterUtil.createMasterThread(JVMClusterUtil.java:132) > at > org.apache.hadoop.hbase.LocalHBaseCluster.addMaster(LocalHBaseCluster.java:239) > at > 
org.apache.hadoop.hbase.LocalHBaseCluster.(LocalHBaseCluster.java:181) > at > org.apache.hadoop.hbase.MiniHBaseCluster.init(MiniHBaseCluster.java:245) > at > org.apache.hadoop.hbase.MiniHBaseCluster.(MiniHBaseCluster.java:115) > at > org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster(HBaseTestingUtility.java:1178) > at > org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:1142) > at > org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:1106) > at > org.apache.hadoop.hbase.TestClusterPortAssignment.testClusterPortAssignment(TestClusterPortAssignment.java:57) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) > at > org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) > at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) > at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) > at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) > at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) > at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) > at 
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) > at > org.apache.hadoop.hbase.SystemExitRule$1.evaluate(SystemExitRule.java:38) > at > org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:288) > at > org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:282) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at java.lang.Thread.run(Thread.java:748) > Caused by: java.net.BindException:
[jira] [Created] (HBASE-24342) [Flakey Tests] TestClusterPortAssignment.testClusterPortAssignment
Michael Stack created HBASE-24342: - Summary: [Flakey Tests] TestClusterPortAssignment.testClusterPortAssignment Key: HBASE-24342 URL: https://issues.apache.org/jira/browse/HBASE-24342 Project: HBase Issue Type: Bug Components: flakies, test Reporter: Michael Stack Assignee: Michael Stack This is a BindException special. We get randomFreePort and then put up the processes. {code} 2020-05-07 00:30:15,844 INFO [Time-limited test] http.HttpServer(1080): HttpServer.start() threw a non Bind IOException java.net.BindException: Port in use: 0.0.0.0:59568 at org.apache.hadoop.hbase.http.HttpServer.openListeners(HttpServer.java:1146) at org.apache.hadoop.hbase.http.HttpServer.start(HttpServer.java:1077) at org.apache.hadoop.hbase.http.InfoServer.start(InfoServer.java:148) at org.apache.hadoop.hbase.regionserver.HRegionServer.putUpWebUI(HRegionServer.java:2133) at org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:670) at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:511) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.util.JVMClusterUtil.createMasterThread(JVMClusterUtil.java:132) at org.apache.hadoop.hbase.LocalHBaseCluster.addMaster(LocalHBaseCluster.java:239) at org.apache.hadoop.hbase.LocalHBaseCluster.<init>(LocalHBaseCluster.java:181) at org.apache.hadoop.hbase.MiniHBaseCluster.init(MiniHBaseCluster.java:245) at org.apache.hadoop.hbase.MiniHBaseCluster.<init>(MiniHBaseCluster.java:115) at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster(HBaseTestingUtility.java:1178) at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:1142) at 
org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:1106) at org.apache.hadoop.hbase.TestClusterPortAssignment.testClusterPortAssignment(TestClusterPortAssignment.java:57) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.apache.hadoop.hbase.SystemExitRule$1.evaluate(SystemExitRule.java:38) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:288) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:282) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:748) Caused by: java.net.BindException: Address already in 
use at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:433) at sun.nio.ch.Net.bind(Net.java:425) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.eclipse.jetty.server.ServerConnector.openAcceptChannel(ServerConnector.java:351) at org.eclipse.jetty.server.ServerConnector.open(ServerConnector.java:319) at
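The race behind this BindException can be sketched as follows. The class and method names below are hypothetical (the real logic lives in the HBase test utilities), but the pattern is the same: bind to port 0 so the OS hands out a free port, close the probe socket, and only later bind the actual server to that port. Anything that binds the port inside that window produces "Address already in use".

```java
import java.io.IOException;
import java.net.ServerSocket;

public class RandomPortRace {
    // Sketch of the "random free port" pattern the test relies on:
    // bind to port 0 to let the OS pick a free port, then release it.
    static int randomFreePort() throws IOException {
        try (ServerSocket probe = new ServerSocket(0)) {
            return probe.getLocalPort(); // free at this instant...
        } // ...but released as soon as the probe closes
    }

    public static void main(String[] args) throws IOException {
        int port = randomFreePort();
        // Between randomFreePort() returning and this bind, any other
        // process or test may claim the port -- the flaky window.
        try (ServerSocket server = new ServerSocket(port)) {
            System.out.println("bound " + port);
        } // intermittently: java.net.BindException: Address already in use
    }
}
```

Because the window cannot be closed from user code (only binding once and keeping the socket, or retrying on BindException, avoids it), disabling the test rather than patching it is a defensible call.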
[GitHub] [hbase] Apache-HBase commented on pull request #1620: HBASE-23339 Release scripts should not need to write out a copy of gpg key material - WIP do not merge
Apache-HBase commented on pull request #1620: URL: https://github.com/apache/hbase/pull/1620#issuecomment-625465596
[GitHub] [hbase] Apache-HBase commented on pull request #1620: HBASE-23339 Release scripts should not need to write out a copy of gpg key material - WIP do not merge
Apache-HBase commented on pull request #1620: URL: https://github.com/apache/hbase/pull/1620#issuecomment-625465391 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 39s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 15s | Maven dependency ordering for branch | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 10s | Maven dependency ordering for patch | ||| _ Other Tests _ | | | | 2m 6s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.8 Server=19.03.8 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1620/3/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/1620 | | Optional Tests | | | uname | Linux 2d63e98e7d1e 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 2cafe81e9c | | Max. process+thread count | 50 (vs. ulimit of 12500) | | modules | C: U: | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1620/3/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #1675: HBASE-24341 The region should be removed from ConfigurationManager as…
Apache-HBase commented on pull request #1675: URL: https://github.com/apache/hbase/pull/1675#issuecomment-625452063 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 36s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 48s | master passed | | +1 :green_heart: | compile | 0m 56s | master passed | | +1 :green_heart: | shadedjars | 5m 12s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 38s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 29s | the patch passed | | +1 :green_heart: | compile | 0m 53s | the patch passed | | +1 :green_heart: | javac | 0m 53s | the patch passed | | +1 :green_heart: | shadedjars | 5m 18s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 35s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 150m 25s | hbase-server in the patch passed. | | | | 173m 58s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.8 Server=19.03.8 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1675/1/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/1675 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux dc9340cd7c49 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / f4a446c3d2 | | Default Java | 1.8.0_232 | | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1675/1/testReport/ | | Max. 
process+thread count | 3853 (vs. ulimit of 12500) | | modules | C: hbase-server U: hbase-server | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1675/1/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] busbey commented on pull request #1643: HBASE-24318 Create-release scripts fixes and enhancements
busbey commented on pull request #1643: URL: https://github.com/apache/hbase/pull/1643#issuecomment-625450946 As I said previously, if this is pressing for someone go ahead. I am already on a forked branch in order to fix problems with the release creation tooling as I find them getting towards RC0. As I become sure that the things I am fixing are actually fixed, I'll post them up for reviews. I'm trying to minimize repeated updates or addenda as I figure out that something was only fixed for a part of the RC generation process. I had wanted to incorporate this change on the branch I'm on (https://github.com/busbey/hbase/tree/HBASE-23339) so that I could ensure they work prior to HBase 3 alpha-1 rather than when I'm trying to maintain a regular cadence of alpha releases. If you are not willing to wait for that for some reason then by all means go ahead. I am likely to ignore them until after alpha-1 RCs are done in that case.
[GitHub] [hbase] Apache-HBase commented on pull request #1675: HBASE-24341 The region should be removed from ConfigurationManager as…
Apache-HBase commented on pull request #1675: URL: https://github.com/apache/hbase/pull/1675#issuecomment-625449027 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 36s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 26s | master passed | | +1 :green_heart: | compile | 1m 9s | master passed | | +1 :green_heart: | shadedjars | 5m 33s | branch has no errors when building our shaded downstream artifacts. | | -0 :warning: | javadoc | 0m 40s | hbase-server in master failed. | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 7s | the patch passed | | +1 :green_heart: | compile | 1m 5s | the patch passed | | +1 :green_heart: | javac | 1m 5s | the patch passed | | +1 :green_heart: | shadedjars | 5m 53s | patch has no errors when building our shaded downstream artifacts. | | -0 :warning: | javadoc | 0m 51s | hbase-server in the patch failed. | ||| _ Other Tests _ | | +1 :green_heart: | unit | 140m 45s | hbase-server in the patch passed. 
| | | | 167m 1s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.8 Server=19.03.8 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1675/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/1675 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux ff5c1befb903 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / f4a446c3d2 | | Default Java | 2020-01-14 | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1675/1/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-server.txt | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1675/1/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-server.txt | | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1675/1/testReport/ | | Max. process+thread count | 3890 (vs. ulimit of 12500) | | modules | C: hbase-server U: hbase-server | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1675/1/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated.
[GitHub] [hbase] Apache-HBase commented on pull request #1620: HBASE-23339 Release scripts should not need to write out a copy of gpg key material - WIP do not merge
Apache-HBase commented on pull request #1620: URL: https://github.com/apache/hbase/pull/1620#issuecomment-625442526 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 41s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +0 :ok: | shelldocs | 0m 0s | Shelldocs was not available. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 37s | Maven dependency ordering for branch | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 11s | Maven dependency ordering for patch | | -1 :x: | hadolint | 0m 3s | The patch generated 4 new + 0 unchanged - 0 fixed = 4 total (was 0) | | -0 :warning: | shellcheck | 0m 3s | The patch generated 12 new + 102 unchanged - 18 fixed = 114 total (was 120) | | -0 :warning: | whitespace | 0m 0s | The patch 5 line(s) with tabs. | ||| _ Other Tests _ | | +0 :ok: | asflicense | 0m 0s | ASF License check generated no output? 
| | | | 2m 52s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.8 Server=19.03.8 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1620/2/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/1620 | | Optional Tests | dupname asflicense shellcheck shelldocs hadolint | | uname | Linux 29f1e06866e7 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 2cafe81e9c | | hadolint | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1620/2/artifact/yetus-general-check/output/diff-patch-hadolint.txt | | shellcheck | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1620/2/artifact/yetus-general-check/output/diff-patch-shellcheck.txt | | whitespace | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1620/2/artifact/yetus-general-check/output/whitespace-tabs.txt | | Max. process+thread count | 52 (vs. ulimit of 12500) | | modules | C: U: | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1620/2/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) shellcheck=0.4.6 hadolint=1.17.5-0-g443423c | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated.
[GitHub] [hbase] Apache-HBase commented on pull request #1620: HBASE-23339 Release scripts should not need to write out a copy of gpg key material - WIP do not merge
Apache-HBase commented on pull request #1620: URL: https://github.com/apache/hbase/pull/1620#issuecomment-625442675
[GitHub] [hbase] Apache-HBase commented on pull request #1664: HBASE-24333 Backport HBASE-24304 "Separate a hbase-asyncfs module" to…
Apache-HBase commented on pull request #1664: URL: https://github.com/apache/hbase/pull/1664#issuecomment-625434336 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 37s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ branch-2 Compile Tests _ | | +0 :ok: | mvndep | 0m 19s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 3m 8s | branch-2 passed | | +1 :green_heart: | checkstyle | 2m 15s | branch-2 passed | | +1 :green_heart: | spotbugs | 13m 34s | branch-2 passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 19s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 12s | the patch passed | | +1 :green_heart: | checkstyle | 2m 13s | root: The patch generated 0 new + 368 unchanged - 4 fixed = 368 total (was 372) | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | xml | 0m 21s | The patch has no ill-formed XML file. | | +1 :green_heart: | hadoopcheck | 11m 37s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1. | | +1 :green_heart: | spotbugs | 15m 31s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 2m 18s | The patch does not generate ASF License warnings. 
| | | | 65m 0s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.8 Server=19.03.8 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1664/6/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/1664 | | Optional Tests | dupname asflicense xml hadoopcheck spotbugs hbaseanti checkstyle | | uname | Linux e57ba5bb6fc4 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | branch-2 / 735aa8bf9f | | Max. process+thread count | 137 (vs. ulimit of 12500) | | modules | C: hbase-asyncfs hbase-server hbase-mapreduce hbase-testing-util hbase-thrift hbase-endpoint hbase-rest hbase-examples hbase-assembly hbase-shaded/hbase-shaded-testing-util . U: . | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1664/6/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) spotbugs=3.1.12 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated.
[GitHub] [hbase] Apache-HBase commented on pull request #1584: HBASE-24256 When fixOverlap hits the max region limit, it is possible…
Apache-HBase commented on pull request #1584: URL: https://github.com/apache/hbase/pull/1584#issuecomment-625428940 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 26s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 3s | master passed | | +1 :green_heart: | checkstyle | 1m 13s | master passed | | +1 :green_heart: | spotbugs | 2m 7s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 42s | the patch passed | | +1 :green_heart: | checkstyle | 1m 10s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | hadoopcheck | 12m 10s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1. | | +1 :green_heart: | spotbugs | 2m 14s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 12s | The patch does not generate ASF License warnings. | | | | 34m 52s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.8 Server=19.03.8 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1584/5/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/1584 | | Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti checkstyle | | uname | Linux b6b08185c982 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 2cafe81e9c | | Max. process+thread count | 84 (vs. 
ulimit of 12500) | | modules | C: hbase-server U: hbase-server | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1584/5/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) spotbugs=3.1.12 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated.
[GitHub] [hbase] Apache-HBase commented on pull request #1678: Backport: HBASE-24328 skip duplicate GCMultipleMergedRegionsProcedure while pre…
Apache-HBase commented on pull request #1678: URL: https://github.com/apache/hbase/pull/1678#issuecomment-625428474 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 44s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ branch-2.3 Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 44s | branch-2.3 passed | | +1 :green_heart: | checkstyle | 1m 9s | branch-2.3 passed | | +1 :green_heart: | spotbugs | 2m 4s | branch-2.3 passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 15s | the patch passed | | +1 :green_heart: | checkstyle | 1m 2s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | hadoopcheck | 20m 13s | Patch does not cause any errors with Hadoop 2.10.0 or 3.1.2 3.2.1. | | +1 :green_heart: | spotbugs | 2m 52s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 14s | The patch does not generate ASF License warnings. | | | | 44m 14s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.8 Server=19.03.8 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1678/1/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/1678 | | Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti checkstyle | | uname | Linux 6562d2ea039f 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | branch-2.3 / 9dd8c9607c | | Max. process+thread count | 94 (vs. 
ulimit of 12500) | | modules | C: hbase-server U: hbase-server | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1678/1/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) spotbugs=3.1.12 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated.
[GitHub] [hbase] huaxiangsun opened a new pull request #1678: Backport: HBASE-24328 skip duplicate GCMultipleMergedRegionsProcedure while pre…
huaxiangsun opened a new pull request #1678: URL: https://github.com/apache/hbase/pull/1678 …vious finished (#1672) Signed-off-by: Huaxiang Sun
[GitHub] [hbase] Apache-HBase commented on pull request #1677: HBASE-24313 [DOCS] Document ignoreTimestamps option added to HashTabl…
Apache-HBase commented on pull request #1677: URL: https://github.com/apache/hbase/pull/1677#issuecomment-625396151 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 35s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 47s | master passed | | +0 :ok: | refguide | 4m 51s | branch has no errors when building the reference guide. See footer for rendered docs, which you should manually inspect. | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 25s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +0 :ok: | refguide | 4m 53s | patch has no errors when building the reference guide. See footer for rendered docs, which you should manually inspect. | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 17s | The patch does not generate ASF License warnings. | | | | 19m 18s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.8 Server=19.03.8 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1677/1/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/1677 | | Optional Tests | dupname asflicense refguide | | uname | Linux 5db9e02a1b6f 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / f4a446c3d2 | | refguide | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1677/1/artifact/yetus-general-check/output/branch-site/book.html | | refguide | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1677/1/artifact/yetus-general-check/output/patch-site/book.html | | Max. 
process+thread count | 78 (vs. ulimit of 12500) | | modules | C: . U: . | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1677/1/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated.
[jira] [Resolved] (HBASE-24335) Support deleteall with ts but without column in shell mode
[ https://issues.apache.org/jira/browse/HBASE-24335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wellington Chevreuil resolved HBASE-24335. -- Resolution: Fixed Thanks for the contribution, [~filtertip]! I have pushed it to the master, branch-2 and branch-2.3 branches. > Support deleteall with ts but without column in shell mode > -- > > Key: HBASE-24335 > URL: https://issues.apache.org/jira/browse/HBASE-24335 > Project: HBase > Issue Type: Improvement > Components: shell >Affects Versions: 3.0.0-alpha-1 >Reporter: Zheng Wang >Assignee: Zheng Wang >Priority: Major > Fix For: 3.0.0-alpha-1, 2.3.0, 2.4.0 > > > The position after the rowkey is the column, so we can't currently specify only a ts. > My proposal is to use an empty string to represent that no column is specified. > Usage: > deleteall 'test','r1','',158876590 > deleteall 'test', {ROWPREFIXFILTER => 'prefix'}, '', 158876590
[jira] [Updated] (HBASE-24335) Support deleteall with ts but without column in shell mode
[ https://issues.apache.org/jira/browse/HBASE-24335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wellington Chevreuil updated HBASE-24335: - Fix Version/s: 2.4.0 2.3.0 > Support deleteall with ts but without column in shell mode > -- > > Key: HBASE-24335 > URL: https://issues.apache.org/jira/browse/HBASE-24335 > Project: HBase > Issue Type: Improvement > Components: shell >Affects Versions: 3.0.0-alpha-1 >Reporter: Zheng Wang >Assignee: Zheng Wang >Priority: Major > Fix For: 3.0.0-alpha-1, 2.3.0, 2.4.0 > > > The position after the rowkey is the column, so we can't currently specify only a ts. > My proposal is to use an empty string to represent that no column is specified. > Usage: > deleteall 'test','r1','',158876590 > deleteall 'test', {ROWPREFIXFILTER => 'prefix'}, '', 158876590
[GitHub] [hbase] Apache-HBase commented on pull request #1676: HBASE-24295 [Chaos Monkey] abstract logging through the class hierarchy ; ADDENDUM
Apache-HBase commented on pull request #1676: URL: https://github.com/apache/hbase/pull/1676#issuecomment-625391128 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 9s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 10s | master passed | | +1 :green_heart: | checkstyle | 0m 17s | master passed | | +1 :green_heart: | spotbugs | 0m 0s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 46s | the patch passed | | +1 :green_heart: | checkstyle | 0m 14s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | hadoopcheck | 12m 27s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1. | | +1 :green_heart: | spotbugs | 0m 0s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 12s | The patch does not generate ASF License warnings. | | | | 29m 49s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.8 Server=19.03.8 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1676/1/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/1676 | | Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti checkstyle | | uname | Linux a72957b4cc54 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / f4a446c3d2 | | Max. process+thread count | 66 (vs. 
ulimit of 12500) | | modules | C: hbase-it U: hbase-it | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1676/1/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated.
[GitHub] [hbase] Apache-HBase commented on pull request #1676: HBASE-24295 [Chaos Monkey] abstract logging through the class hierarchy ; ADDENDUM
Apache-HBase commented on pull request #1676: URL: https://github.com/apache/hbase/pull/1676#issuecomment-625390109 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 28s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 5m 24s | master passed | | +1 :green_heart: | compile | 0m 33s | master passed | | +1 :green_heart: | shadedjars | 6m 40s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 15s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 43s | the patch passed | | +1 :green_heart: | compile | 0m 31s | the patch passed | | +1 :green_heart: | javac | 0m 31s | the patch passed | | +1 :green_heart: | shadedjars | 6m 3s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 13s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 0m 52s | hbase-it in the patch passed. | | | | 27m 49s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.8 Server=19.03.8 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1676/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/1676 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 5688709556ed 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / f4a446c3d2 | | Default Java | 2020-01-14 | | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1676/1/testReport/ | | Max. process+thread count | 662 (vs. 
ulimit of 12500) | | modules | C: hbase-it U: hbase-it | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1676/1/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated.
[jira] [Updated] (HBASE-24335) Support deleteall with ts but without column in shell mode
[ https://issues.apache.org/jira/browse/HBASE-24335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wellington Chevreuil updated HBASE-24335: - Affects Version/s: 3.0.0-alpha-1 > Support deleteall with ts but without column in shell mode > -- > > Key: HBASE-24335 > URL: https://issues.apache.org/jira/browse/HBASE-24335 > Project: HBase > Issue Type: Improvement > Components: shell >Affects Versions: 3.0.0-alpha-1 >Reporter: Zheng Wang >Assignee: Zheng Wang >Priority: Major > > The position after the rowkey is the column, so we can't currently specify only a ts. > My proposal is to use an empty string to represent that no column is specified. > Usage: > deleteall 'test','r1','',158876590 > deleteall 'test', {ROWPREFIXFILTER => 'prefix'}, '', 158876590
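[Editor's illustration] The HBASE-24335 proposal quoted above can be sketched as a short HBase shell session. This is a hedged sketch, not part of the thread: the table name, column family, row key, and timestamp below are placeholders, and the empty-string column argument follows the convention the issue describes.

```ruby
# HBase shell (JRuby). Set up a demo table with one column family.
create 'test', 'cf'
put 'test', 'r1', 'cf:a', 'v1'

# Proposed usage: pass '' in the column position so only the timestamp
# is supplied. Intended to place delete markers covering cells in row
# 'r1' at or before the given timestamp, across all columns.
deleteall 'test', 'r1', '', 1588765900000

# Same idea with a row-prefix filter instead of an exact row key.
deleteall 'test', { ROWPREFIXFILTER => 'r' }, '', 1588765900000
```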