[GitHub] [hbase] Apache-HBase commented on pull request #2625: HBASE-25238 Upgrading HBase from 2.2.0 to 2.3.x fails because of “Mes…
Apache-HBase commented on pull request #2625:
URL: https://github.com/apache/hbase/pull/2625#issuecomment-722189339

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| +0 :ok: | reexec | 1m 38s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +0 :ok: | prototool | 0m 0s | prototool was not available. |
| +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 :green_heart: | @author | 0m 1s | The patch does not contain any @author tags. |
||| _ branch-2 Compile Tests _ |
| +0 :ok: | mvndep | 0m 17s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 4m 13s | branch-2 passed |
| +1 :green_heart: | checkstyle | 2m 34s | branch-2 passed |
| +1 :green_heart: | spotbugs | 6m 55s | branch-2 passed |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 13s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 3m 47s | the patch passed |
| +1 :green_heart: | checkstyle | 2m 13s | the patch passed |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | hadoopcheck | 14m 9s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1. |
| +1 :green_heart: | hbaseprotoc | 2m 1s | the patch passed |
| +1 :green_heart: | spotbugs | 7m 2s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | asflicense | 0m 35s | The patch does not generate ASF License warnings. |
|  |  | 54m 42s |  |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2625/1/artifact/yetus-general-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/2625 |
| Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti checkstyle cc hbaseprotoc prototool |
| uname | Linux 542adf2f332e 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | branch-2 / 3d8152b635 |
| Max. process+thread count | 84 (vs. ulimit of 12500) |
| modules | C: hbase-protocol-shaded hbase-client hbase-server U: . |
| Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2625/1/console |
| versions | git=2.17.1 maven=3.6.3 spotbugs=3.1.12 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (HBASE-25238) Upgrading HBase from 2.2.0 to 2.3.x fails because of “Message missing required fields: state”
[ https://issues.apache.org/jira/browse/HBASE-25238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17226508#comment-17226508 ]

Michael Stack commented on HBASE-25238:
---

Added suggested PR. Manual testing of the upgrade is taking a bit of time...

> Upgrading HBase from 2.2.0 to 2.3.x fails because of “Message missing required fields: state”
> -
>
> Key: HBASE-25238
> URL: https://issues.apache.org/jira/browse/HBASE-25238
> Project: HBase
> Issue Type: Bug
> Affects Versions: 2.2.0
> Reporter: Zhuqi Jin
> Priority: Critical
>
> When we upgraded an HBase cluster from 2.0.0-RC0 to 2.3.0 or 2.3.3, the HMaster on the upgraded node failed to start.
> The error message is shown below:
> {code:java}
> 2020-11-02 23:04:01,998 ERROR [master/2c4006997f99:16000:becomeActiveMaster] master.HMaster: Failed to become active master
> org.apache.hbase.thirdparty.com.google.protobuf.InvalidProtocolBufferException: Message missing required fields: state
>     at org.apache.hbase.thirdparty.com.google.protobuf.UninitializedMessageException.asInvalidProtocolBufferException(UninitializedMessageException.java:79)
>     at org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.checkMessageInitialized(AbstractParser.java:68)
>     at org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:120)
>     at org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:125)
>     at org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:48)
>     at org.apache.hbase.thirdparty.com.google.protobuf.Any.unpack(Any.java:228)
>     at org.apache.hadoop.hbase.procedure2.ProcedureUtil$StateSerializer.deserialize(ProcedureUtil.java:124)
>     at org.apache.hadoop.hbase.master.assignment.RegionRemoteProcedureBase.deserializeStateData(RegionRemoteProcedureBase.java:352)
>     at org.apache.hadoop.hbase.master.assignment.OpenRegionProcedure.deserializeStateData(OpenRegionProcedure.java:72)
>     at org.apache.hadoop.hbase.procedure2.ProcedureUtil.convertToProcedure(ProcedureUtil.java:294)
>     at org.apache.hadoop.hbase.procedure2.store.ProtoAndProcedure.getProcedure(ProtoAndProcedure.java:43)
>     at org.apache.hadoop.hbase.procedure2.store.InMemoryProcedureIterator.next(InMemoryProcedureIterator.java:90)
>     at org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore$1.load(RegionProcedureStore.java:194)
>     at org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore$2.load(WALProcedureStore.java:474)
>     at org.apache.hadoop.hbase.procedure2.store.wal.ProcedureWALFormatReader.finish(ProcedureWALFormatReader.java:151)
>     at org.apache.hadoop.hbase.procedure2.store.wal.ProcedureWALFormat.load(ProcedureWALFormat.java:103)
>     at org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.load(WALProcedureStore.java:465)
>     at org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore.tryMigrate(RegionProcedureStore.java:184)
>     at org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore.recoverLease(RegionProcedureStore.java:257)
>     at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.init(ProcedureExecutor.java:587)
>     at org.apache.hadoop.hbase.master.HMaster.createProcedureExecutor(HMaster.java:1572)
>     at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:950)
>     at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2240)
>     at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:622)
>     at java.lang.Thread.run(Thread.java:748)
> 2020-11-02 23:04:01,998 ERROR [master/2c4006997f99:16000:becomeActiveMaster] master.HMaster: * ABORTING master 2c4006997f99,16000,1604358237412: Unhandled exception. Starting shutdown. *
> org.apache.hbase.thirdparty.com.google.protobuf.InvalidProtocolBufferException: Message missing required fields: state
>     at org.apache.hbase.thirdparty.com.google.protobuf.UninitializedMessageException.asInvalidProtocolBufferException(UninitializedMessageException.java:79)
>     at org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.checkMessageInitialized(AbstractParser.java:68)
>     at org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:120)
>     at org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:125)
>     at
> {code}
[GitHub] [hbase] saintstack opened a new pull request #2625: HBASE-25238 Upgrading HBase from 2.2.0 to 2.3.x fails because of “Mes…
saintstack opened a new pull request #2625:
URL: https://github.com/apache/hbase/pull/2625

…sage missing required fields: state”

Make protobuf fields added post-2.0.0 release be 'optional' instead of 'required', so that migrations from 2.0.x to 2.1+ or 2.2+ succeed.
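The PR description relies on proto2's parsing rule for `required` fields. A minimal hedged sketch (message and field names here are illustrative, not the actual HBase .proto definitions):

```proto
// Illustrative only. A proto2 parser rejects any serialized message that
// lacks a 'required' field, failing with
// InvalidProtocolBufferException("Message missing required fields: ...").
message ExampleProcedureStateData {
  required bytes region = 1;  // present since the first release

  // Field added in a later release. Declared 'required', procedure state
  // written by an older version (which never set it) can no longer be
  // parsed during upgrade. Declared 'optional', old data still loads and
  // readers handle the unset case.
  optional int32 state = 2;
}
```

This is why proto2 style guidance discourages `required`: once messages are persisted (here, in the procedure WAL), adding a `required` field breaks every reader of pre-existing data.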
[GitHub] [hbase] Apache9 commented on pull request #2584: HBASE-25126 Add load balance logic in hbase-client to distribute read…
Apache9 commented on pull request #2584:
URL: https://github.com/apache/hbase/pull/2584#issuecomment-722153134

> > Landing this on master is proposed by me because this PR is not related to the server-side changes. It can be used in our current code base without the changes in HBASE-18070. What's more, the client-side code is different between master and branch-2: on master we rebuilt the sync client on top of the async client, which makes it much easier to implement this issue, but on branch-2 you need to deal with the sync client separately. So I suggest we land this on master, and then start backporting to branch-2 ASAP.
>
> As I understand, HBASE-18070 is branched from master. As we are merging HBASE-18070 back to master, it would be better to merge them as a whole. Unit tests are different without the meta replication source changes in HBASE-18070, as there is no realtime replication of meta WAL edits. I simulated that by "flush" and "refresh" of hfiles for meta. What do you think?

I'm fine with committing to HBASE-18070, but it will delay the whole process I suppose. If you commit this to master, while backporting to branch-2 you just need to backport this commit only. If you commit to HBASE-18070, while backporting you need to deal with all the commits. And you can change the UT on the feature branch when meta replication is available. In general, I just want to help you guys land the feature faster. I do not get why you do not want to commit to master; usually developers do the opposite... Anyway, I've approved the PR. You are free to commit to master or the feature branch as you like. Thanks.
[jira] [Commented] (HBASE-25238) Upgrading HBase from 2.2.0 to 2.3.x fails because of “Message missing required fields: state”
[ https://issues.apache.org/jira/browse/HBASE-25238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17226485#comment-17226485 ]

Anoop Sam John commented on HBASE-25238:
---

Actually, upgrades from 2.0.x or 2.1.x to 2.2.0+ versions will have this issue. Here the test was from 2.2.0 RC0, right? In the 2.2.0 release itself this breaking change went in. Can we change the jira title and description?

> Upgrading HBase from 2.2.0 to 2.3.x fails because of “Message missing required fields: state”
> -
>
> Key: HBASE-25238
> URL: https://issues.apache.org/jira/browse/HBASE-25238
> Project: HBase
> Issue Type: Bug
> Affects Versions: 2.2.0
> Reporter: Zhuqi Jin
> Priority: Critical
>
> When we upgraded an HBase cluster from 2.0.0-RC0 to 2.3.0 or 2.3.3, the HMaster on the upgraded node failed to start.
> The error message is shown below:
> {code:java}
> 2020-11-02 23:04:01,998 ERROR [master/2c4006997f99:16000:becomeActiveMaster] master.HMaster: Failed to become active master
> org.apache.hbase.thirdparty.com.google.protobuf.InvalidProtocolBufferException: Message missing required fields: state
>     at org.apache.hbase.thirdparty.com.google.protobuf.UninitializedMessageException.asInvalidProtocolBufferException(UninitializedMessageException.java:79)
>     at org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.checkMessageInitialized(AbstractParser.java:68)
>     at org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:120)
>     at org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:125)
>     at org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:48)
>     at org.apache.hbase.thirdparty.com.google.protobuf.Any.unpack(Any.java:228)
>     at org.apache.hadoop.hbase.procedure2.ProcedureUtil$StateSerializer.deserialize(ProcedureUtil.java:124)
>     at org.apache.hadoop.hbase.master.assignment.RegionRemoteProcedureBase.deserializeStateData(RegionRemoteProcedureBase.java:352)
>     at org.apache.hadoop.hbase.master.assignment.OpenRegionProcedure.deserializeStateData(OpenRegionProcedure.java:72)
>     at org.apache.hadoop.hbase.procedure2.ProcedureUtil.convertToProcedure(ProcedureUtil.java:294)
>     at org.apache.hadoop.hbase.procedure2.store.ProtoAndProcedure.getProcedure(ProtoAndProcedure.java:43)
>     at org.apache.hadoop.hbase.procedure2.store.InMemoryProcedureIterator.next(InMemoryProcedureIterator.java:90)
>     at org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore$1.load(RegionProcedureStore.java:194)
>     at org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore$2.load(WALProcedureStore.java:474)
>     at org.apache.hadoop.hbase.procedure2.store.wal.ProcedureWALFormatReader.finish(ProcedureWALFormatReader.java:151)
>     at org.apache.hadoop.hbase.procedure2.store.wal.ProcedureWALFormat.load(ProcedureWALFormat.java:103)
>     at org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.load(WALProcedureStore.java:465)
>     at org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore.tryMigrate(RegionProcedureStore.java:184)
>     at org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore.recoverLease(RegionProcedureStore.java:257)
>     at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.init(ProcedureExecutor.java:587)
>     at org.apache.hadoop.hbase.master.HMaster.createProcedureExecutor(HMaster.java:1572)
>     at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:950)
>     at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2240)
>     at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:622)
>     at java.lang.Thread.run(Thread.java:748)
> 2020-11-02 23:04:01,998 ERROR [master/2c4006997f99:16000:becomeActiveMaster] master.HMaster: * ABORTING master 2c4006997f99,16000,1604358237412: Unhandled exception. Starting shutdown. *
> org.apache.hbase.thirdparty.com.google.protobuf.InvalidProtocolBufferException: Message missing required fields: state
>     at org.apache.hbase.thirdparty.com.google.protobuf.UninitializedMessageException.asInvalidProtocolBufferException(UninitializedMessageException.java:79)
>     at org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.checkMessageInitialized(AbstractParser.java:68)
>     at org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parseFrom(AbstractParser.ja
> {code}
[jira] [Updated] (HBASE-25229) Instantiate BucketCache before RS creates its ephemeral node when rolling-upgrade
[ https://issues.apache.org/jira/browse/HBASE-25229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jeongdae Kim updated HBASE-25229:
---
Affects Version/s: 1.7.0, 1.5.0, 1.6.0

> Instantiate BucketCache before the RS creates its ephemeral node when rolling-upgrade
> -
>
> Key: HBASE-25229
> URL: https://issues.apache.org/jira/browse/HBASE-25229
> Project: HBase
> Issue Type: Improvement
> Affects Versions: 1.5.0, 1.6.0, 1.7.0, 1.4.13
> Reporter: Jeongdae Kim
> Assignee: Jeongdae Kim
> Priority: Minor
>
> We observed that many clients couldn't get information on region locations for tens of seconds during a rolling upgrade from 1.2.x to 1.4.x, and all requests to regions moved by graceful restart failed.
>
> The reasons are:
> # Since HBASE-17931, system tables are assigned to the RS with the highest version.
> # Since HBASE-12034, the bucket cache initialization process has moved from RS instantiation to the RS initialization process after reporting to the master; moreover, the ephemeral node for the RS is created before bucket cache creation.
> # When using an offheap bucket cache, it takes a long time to allocate memory for it (18 seconds for 31GB in our case): https://github.com/apache/hbase/blob/branch-1.4/hbase-common/src/main/java/org/apache/hadoop/hbase/util/ByteBufferArray.java#L52-L72
> # Once the ephemeral node is created, the master tries to move system regions to the RS with the highest version at the first RS restart of the whole rolling-restart process. But, because of 3), the RS is not ready to serve system regions yet, so moving system regions keeps failing until 3) is finished.
>
> I think this could happen only in branch-1, because in HBase 2.x the ephemeral node is created after creating block caches. There is no need to create block caches after ephemeral node creation at all.
>
> I verified this issue could be resolved by just changing their creation order.

--
This message was sent by Atlassian Jira (v8.3.4#803005)
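The reasoning above boils down to a pure ordering change. A hedged toy sketch (class and method names are hypothetical, not the actual HBase startup code):

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of RS startup ordering. If the slow offheap bucket cache
// allocation (~18s for 31GB per the report) happens before the ephemeral
// znode is created, the master never sees a "live" RS that cannot yet
// serve the system regions it wants to move there.
public class StartupOrder {
    final List<String> steps = new ArrayList<>();

    void allocateBucketCache() { steps.add("bucketCache"); }   // slow: offheap allocation
    void createEphemeralNode() { steps.add("ephemeralNode"); } // master now considers RS live
    void reportForDuty()       { steps.add("reportForDuty"); }

    // branch-1 order before the fix: znode exists while the cache is still allocating
    void startBeforeFix() { createEphemeralNode(); reportForDuty(); allocateBucketCache(); }

    // proposed order (what 2.x already does): allocate caches, then announce availability
    void startAfterFix() { allocateBucketCache(); createEphemeralNode(); reportForDuty(); }

    public static void main(String[] args) {
        StartupOrder rs = new StartupOrder();
        rs.startAfterFix();
        if (rs.steps.indexOf("bucketCache") >= rs.steps.indexOf("ephemeralNode")) {
            throw new AssertionError("cache must be ready before registration");
        }
        System.out.println(rs.steps); // [bucketCache, ephemeralNode, reportForDuty]
    }
}
```

The design point is that registration (the znode) is the signal the master acts on, so any expensive initialization must complete before that signal is emitted.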
[jira] [Comment Edited] (HBASE-17910) Use separated StoreFileReader for streaming read
[ https://issues.apache.org/jira/browse/HBASE-17910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17226438#comment-17226438 ]

Karthik Palanisamy edited comment on HBASE-17910 at 11/5/20, 1:17 AM:
---

[~anoop.hbase] [~zhangduo] [~busbey] [~elserj] Recently, one of our users reported high CPU usage on the namenode. While troubleshooting, we found millions of OPEN and GetFileInfo calls going continuously to the namenode, caused by readType STREAM, which creates multiple scanners. I understand we switch readType to STREAM automatically, but I don't find any flag to disable STREAM. I am curious if that is the expected design? The switch happens as below:
- if the scan becomes a get;
- if the scan has a startrow and stoprow;
- if the scan keeps running for a long time, i.e. kv bytesRead > preadMaxBytes (default preadMaxBytes is 4*blockSize, which is 4*64KB).

Maybe this spike occurs on every cluster, but the users might not have noticed yet. At this moment, I am trying to work around it with "hbase.storescanner.pread.max.bytes" and "hbase.cells.scanned.per.heartbeat.check". Will post more updates next week with the root cause.

{code:java}
...
private StoreScanner(HStore store, Scan scan, ScanInfo scanInfo, int numColumns, long readPt,
    boolean cacheBlocks, ScanType scanType) {
  ..
  get = scan.isGetScan();
  ..
  this.maxRowSize = scanInfo.getTableMaxRowSize();
  if (get) {
    this.readType = Scan.ReadType.PREAD;
    this.scanUsePread = true;
  }
...
public void shipped() throws IOException {
  ..
  clearAndClose(scannersForDelayedClose);
  if (this.heap != null) {
    ..
    trySwitchToStreamRead();
  }
}
..
void trySwitchToStreamRead() {
  if (readType != Scan.ReadType.DEFAULT || !scanUsePread || closing ||
      heap.peek() == null || bytesRead < preadMaxBytes) {
    return;
  }
  LOG.debug("Switch to stream read (scanned={} bytes) of {}", bytesRead,
      this.store.getColumnFamilyName());
  ..
}
{code}

was (Author: kpalanisamy):
[~anoop.hbase] [~zhangduo] [~busbey] [~elserj] Recently, one of our users reported high CPU usage on the namenode. While troubleshooting, we found millions of OPEN and GetFileInfo calls going continuously to the namenode, caused by readType STREAM, which creates multiple scanners. I understand we switch readType to STREAM automatically, but I don't find any flag to disable STREAM. I am curious if that is the expected design? The switch happens as below: if the scan becomes a get; if the scan has a startrow and stoprow; if the scan keeps running for a long time, i.e. kv bytesRead > preadMaxBytes (default preadMaxBytes is 4*blockSize, which is 4*64KB). Maybe this spike occurs on every cluster, but the users might not have noticed yet. At this moment, I am trying to work around it with "hbase.storescanner.pread.max.bytes" and "hbase.cells.scanned.per.heartbeat.check". Will post more updates next week with the root cause.

{code:java}
this(family, minVersions, maxVersions, ttl, keepDeletedCells, timeToPurgeDeletes, comparator,
    conf.getLong(HConstants.TABLE_MAX_ROWSIZE_KEY, HConstants.TABLE_MAX_ROWSIZE_DEFAULT),
    conf.getBoolean("hbase.storescanner.use.pread", false), getCellsPerTimeoutCheck(conf),
    conf.getBoolean(StoreScanner.STORESCANNER_PARALLEL_SEEK_ENABLE, false),
    conf.getLong(StoreScanner.STORESCANNER_PREAD_MAX_BYTES, 4 * blockSize), newVersionBehavior);
...
private StoreScanner(HStore store, Scan scan, ScanInfo scanInfo, int numColumns, long readPt,
    boolean cacheBlocks, ScanType scanType) {
  ..
  get = scan.isGetScan();
  ..
  this.maxRowSize = scanInfo.getTableMaxRowSize();
  if (get) {
    this.readType = Scan.ReadType.PREAD;
    this.scanUsePread = true;
  }
...{code}

> Use separated StoreFileReader for streaming read
> -
>
> Key: HBASE-17910
> URL: https://issues.apache.org/jira/browse/HBASE-17910
> Project: HBase
> Issue Type: Improvement
> Components: regionserver, Scanners
> Affects Versions: 2.0.0
> Reporter: Duo Zhang
> Assignee: Duo Zhang
> Priority: Major
> Fix For: 2.0.0
>
> For now we have already supported using private readers for compaction, by creating a new StoreFile copy. I think a better way is to allow creating multiple readers from a single StoreFile instance; thus we can avoid the ugly cloning, and the reader can also be used for streaming scan, not only for compaction.
> The reason we want to do this is that we found read amplification when using short circuit read. {{BlockReaderLocal}} will use an internal buffer to read data first; the buffer size is based on the configured buffer size and the readahead option in CachingStrategy. For normal pread requests, we should just bypass the buffer; this can be achieved by setting readahead to 0. But for str
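The threshold described in the comment can be checked with a small sketch of the switch condition (constant names here are hypothetical; the default of "hbase.storescanner.pread.max.bytes" is assumed to be 4 * blockSize as quoted above):

```java
// Sketch of the pread -> stream switch decision described in the comment.
// Default threshold: 4 * 64 KB block size = 256 KB of cell bytes read.
// Every switch opens new stream readers, hence the extra namenode
// OPEN/GetFileInfo calls observed in the report.
public class PreadSwitch {
    static final long BLOCK_SIZE = 64 * 1024;            // default HFile block size
    static final long PREAD_MAX_BYTES = 4 * BLOCK_SIZE;  // 262144 bytes

    // Mirrors the guard in trySwitchToStreamRead(): stay on pread while
    // bytesRead < preadMaxBytes, switch to STREAM once it is reached.
    static boolean shouldSwitchToStream(long bytesRead) {
        return bytesRead >= PREAD_MAX_BYTES;
    }

    public static void main(String[] args) {
        System.out.println(PREAD_MAX_BYTES);                   // 262144
        System.out.println(shouldSwitchToStream(100 * 1024));  // false
        System.out.println(shouldSwitchToStream(300 * 1024));  // true
    }
}
```

This also shows why raising "hbase.storescanner.pread.max.bytes" works as a mitigation: a larger threshold keeps long-running scans on pread.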
[jira] [Commented] (HBASE-17910) Use separated StoreFileReader for streaming read
[ https://issues.apache.org/jira/browse/HBASE-17910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17226438#comment-17226438 ] Karthik Palanisamy commented on HBASE-17910: [~anoop.hbase] [~zhangduo] [~busbey] [~elserj] Recently, one of our user reported high CPU usage in namenode. On our troubleshooting, found millions of OPEN and GetFileInfo calls continuously to namenode, is because of readType STREAM which creates multiple scanners. I understand we switch readType to STREAM automatically but I don't find any flag to disable STREAM. I am curious if that is the expected design? If the scan become get. if the scan with startrow and stoprow. if the scan keeps running for long time. I.e kv bytesRead > preadMaxBytes. (Default preadMaxBytes is 4*blockSize, which is 4*64KB). Maybe this spike could be at every cluster but the user might not be noticed yet. At this moment, I am trying to work around with "hbase.storescanner.pread.max.bytes" and "hbase.cells.scanned.per.heartbeat.check". Will post more updates next week with the root cause. {code:java} this(family, minVersions, maxVersions, ttl, keepDeletedCells, timeToPurgeDeletes, comparator, conf.getLong(HConstants.TABLE_MAX_ROWSIZE_KEY, HConstants.TABLE_MAX_ROWSIZE_DEFAULT), conf.getBoolean("hbase.storescanner.use.pread", false), getCellsPerTimeoutCheck(conf), conf.getBoolean(StoreScanner.STORESCANNER_PARALLEL_SEEK_ENABLE, false), conf.getLong(StoreScanner.STORESCANNER_PREAD_MAX_BYTES, 4 * blockSize), newVersionBehavior);{code} > Use separated StoreFileReader for streaming read > > > Key: HBASE-17910 > URL: https://issues.apache.org/jira/browse/HBASE-17910 > Project: HBase > Issue Type: Improvement > Components: regionserver, Scanners >Affects Versions: 2.0.0 >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > Fix For: 2.0.0 > > > For now we have already supportted using private readers for compaction, by > creating a new StoreFile copy. 
I think a better way is to allow creating > multiple readers from a single StoreFile instance, thus we can avoid the ugly > cloning, and the reader can also be used for streaming scan, not only for > compaction. > The reason we want to do this is that, we found a read amplification when > using short circuit read. {{BlockReaderLocal}} will use an internal buffer to > read data first, the buffer size is based on the configured buffer size and > the readahead option in CachingStrategy. For normal pread requests, we should > just bypass the buffer; this can be achieved by setting readahead to 0. But > for streaming read I think the buffer is still useful, so we need to > use a different FSDataInputStream for pread and streaming read. > And one more thing is that, we can also remove the streamLock if streaming > read always uses its own reader. -- This message was sent by Atlassian Jira (v8.3.4#803005)
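The "separate stream per read type" idea above can be modeled with plain JDK I/O. This is an analogy, not HBase code: give positional (pread-style) and sequential (stream-style) access their own file handles so they never contend on a shared position — the same reason the streamLock becomes unnecessary when streaming reads own their reader.

```java
import java.io.*;
import java.nio.file.*;

// Plain-JDK analogy for per-read-type readers: positional and sequential
// access get independent handles, so no lock is needed between them.
class DualReader implements Closeable {
  private final RandomAccessFile pread;      // positional reads
  private final BufferedInputStream stream;  // sequential, buffered reads

  DualReader(Path file) throws IOException {
    this.pread = new RandomAccessFile(file.toFile(), "r");
    this.stream = new BufferedInputStream(Files.newInputStream(file));
  }

  // pread-style: read at an absolute offset without disturbing the
  // sequential stream's position.
  int readAt(long offset, byte[] buf) throws IOException {
    pread.seek(offset);
    return pread.read(buf);
  }

  // stream-style: advance the independent sequential position; the
  // BufferedInputStream's buffer plays the role readahead plays for
  // streaming scans.
  int readNext(byte[] buf) throws IOException {
    return stream.read(buf);
  }

  @Override
  public void close() throws IOException {
    pread.close();
    stream.close();
  }
}
```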
[GitHub] [hbase] Apache-HBase commented on pull request #2584: HBASE-25126 Add load balance logic in hbase-client to distribute read…
Apache-HBase commented on pull request #2584: URL: https://github.com/apache/hbase/pull/2584#issuecomment-722031640 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 12s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 29s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 3m 51s | master passed | | +1 :green_heart: | compile | 1m 25s | master passed | | +1 :green_heart: | shadedjars | 7m 10s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 58s | master passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 14s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 48s | the patch passed | | +1 :green_heart: | compile | 1m 23s | the patch passed | | +1 :green_heart: | javac | 1m 23s | the patch passed | | +1 :green_heart: | shadedjars | 7m 15s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 57s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 12s | hbase-client in the patch passed. | | -1 :x: | unit | 215m 33s | hbase-server in the patch failed. 
| | | | 247m 24s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2584/14/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2584 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux c4bdb70ebdd9 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 0e71d6192a | | Default Java | AdoptOpenJDK-1.8.0_232-b09 | | unit | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2584/14/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-hbase-server.txt | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2584/14/testReport/ | | Max. process+thread count | 2990 (vs. ulimit of 3) | | modules | C: hbase-client hbase-server U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2584/14/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Comment Edited] (HBASE-25238) Upgrading HBase from 2.2.0 to 2.3.x fails because of “Message missing required fields: state”
[ https://issues.apache.org/jira/browse/HBASE-25238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17226288#comment-17226288 ] Michael Stack edited comment on HBASE-25238 at 11/4/20, 10:53 PM: -- Marking this issue critical. Can change the proto fields to be optional so upgrades work. Let me make a patch. Thanks for linking HBASE-25234 [~pankajkumar] . Let me fix that too. was (Author: stack): Marking this issue critical. Can change the proto fields to be optional so upgrades. Let me make a patch. Thanks for linking HBASE-25234 [~pankajkumar] . Let me fix that too. > Upgrading HBase from 2.2.0 to 2.3.x fails because of “Message missing > required fields: state” > - > > Key: HBASE-25238 > URL: https://issues.apache.org/jira/browse/HBASE-25238 > Project: HBase > Issue Type: Bug >Affects Versions: 2.2.0 >Reporter: Zhuqi Jin >Priority: Critical > > When we upgraded HBASE cluster from 2.0.0-RC0 to 2.3.0 or 2.3.3, the HMaster > on upgraded node failed to start. 
> The error message is shown below: > {code:java} > 2020-11-02 23:04:01,998 ERROR [master/2c4006997f99:16000:becomeActiveMaster] > master.HMaster: Failed to become active > masterorg.apache.hbase.thirdparty.com.google.protobuf.InvalidProtocolBufferException: > Message missing required fields: state at > org.apache.hbase.thirdparty.com.google.protobuf.UninitializedMessageException.asInvalidProtocolBufferException(UninitializedMessageException.java:79) > at > org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.checkMessageInitialized(AbstractParser.java:68) > at > org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:120) > at > org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:125) > at > org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:48) > at > org.apache.hbase.thirdparty.com.google.protobuf.Any.unpack(Any.java:228) > at > org.apache.hadoop.hbase.procedure2.ProcedureUtil$StateSerializer.deserialize(ProcedureUtil.java:124) > at > org.apache.hadoop.hbase.master.assignment.RegionRemoteProcedureBase.deserializeStateData(RegionRemoteProcedureBase.java:352) > at > org.apache.hadoop.hbase.master.assignment.OpenRegionProcedure.deserializeStateData(OpenRegionProcedure.java:72) > at > org.apache.hadoop.hbase.procedure2.ProcedureUtil.convertToProcedure(ProcedureUtil.java:294) > at > org.apache.hadoop.hbase.procedure2.store.ProtoAndProcedure.getProcedure(ProtoAndProcedure.java:43) > at > org.apache.hadoop.hbase.procedure2.store.InMemoryProcedureIterator.next(InMemoryProcedureIterator.java:90) > at > org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore$1.load(RegionProcedureStore.java:194) > at > org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore$2.load(WALProcedureStore.java:474) > at > org.apache.hadoop.hbase.procedure2.store.wal.ProcedureWALFormatReader.finish(ProcedureWALFormatReader.java:151) > at > 
org.apache.hadoop.hbase.procedure2.store.wal.ProcedureWALFormat.load(ProcedureWALFormat.java:103) > at > org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.load(WALProcedureStore.java:465) > at > org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore.tryMigrate(RegionProcedureStore.java:184) > at > org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore.recoverLease(RegionProcedureStore.java:257) > at > org.apache.hadoop.hbase.procedure2.ProcedureExecutor.init(ProcedureExecutor.java:587) > at > org.apache.hadoop.hbase.master.HMaster.createProcedureExecutor(HMaster.java:1572) > at > org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:950) > at > org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2240) > at > org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:622) > at java.lang.Thread.run(Thread.java:748)2020-11-02 23:04:01,998 ERROR > [master/2c4006997f99:16000:becomeActiveMaster] master.HMaster: * ABORTING > master 2c4006997f99,16000,1604358237412: Unhandled exception. Starting > shutdown. > *org.apache.hbase.thirdparty.com.google.protobuf.InvalidProtocolBufferException: > Message missing required fields: state at > org.apache.hbase.thirdparty.com.google.protobuf.UninitializedMessageException.asInvalidProtocolBufferException(UninitializedMessageException.java:79) >
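The fix proposed above — relaxing the proto fields from required to optional — matters because proto2 parsing rejects any serialized message that lacks a required field. A hypothetical sketch (the message name is real, but the field declaration and number here are illustrative, not the exact HBase .proto):

```protobuf
// proto2 sketch only; the real definition lives in the
// hbase-protocol-shaded .proto files and its exact fields may differ.
message RegionRemoteProcedureBaseStateData {
  // Procedures serialized by a 2.2.0 master predate this field. Declared
  // "required", a 2.3.x master replaying the old procedure store fails
  // with "Message missing required fields: state".
  // required RegionStateTransitionState state = 3;

  // Declared "optional", old serialized procedures still parse, and the
  // deserialization code handles the absent value explicitly:
  optional RegionStateTransitionState state = 3;
}
```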
[GitHub] [hbase] Apache-HBase commented on pull request #2584: HBASE-25126 Add load balance logic in hbase-client to distribute read…
Apache-HBase commented on pull request #2584: URL: https://github.com/apache/hbase/pull/2584#issuecomment-722005281 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 3s | Docker mode activated. | | -0 :warning: | yetus | 0m 4s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 30s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 3m 57s | master passed | | +1 :green_heart: | compile | 1m 32s | master passed | | +1 :green_heart: | shadedjars | 6m 35s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 1m 5s | master passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 16s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 4m 1s | the patch passed | | +1 :green_heart: | compile | 1m 33s | the patch passed | | +1 :green_heart: | javac | 1m 33s | the patch passed | | +1 :green_heart: | shadedjars | 6m 40s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 1m 5s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 4s | hbase-client in the patch passed. | | +1 :green_heart: | unit | 138m 52s | hbase-server in the patch passed. 
| | | | 170m 44s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2584/14/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2584 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 426d754f1244 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 0e71d6192a | | Default Java | AdoptOpenJDK-11.0.6+10 | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2584/14/testReport/ | | Max. process+thread count | 3834 (vs. ulimit of 3) | | modules | C: hbase-client hbase-server U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2584/14/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #2584: HBASE-25126 Add load balance logic in hbase-client to distribute read…
Apache-HBase commented on pull request #2584: URL: https://github.com/apache/hbase/pull/2584#issuecomment-721950562 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 7s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 13s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 3m 52s | master passed | | +1 :green_heart: | checkstyle | 1m 37s | master passed | | +1 :green_heart: | spotbugs | 3m 6s | master passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 12s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 45s | the patch passed | | -0 :warning: | checkstyle | 0m 28s | hbase-client: The patch generated 4 new + 2 unchanged - 0 fixed = 6 total (was 2) | | -0 :warning: | checkstyle | 1m 10s | hbase-server: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | hadoopcheck | 18m 50s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1 3.3.0. | | +1 :green_heart: | spotbugs | 3m 26s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 22s | The patch does not generate ASF License warnings. 
| | | | 46m 8s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2584/14/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2584 | | Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti checkstyle | | uname | Linux bd32192c7683 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 0e71d6192a | | checkstyle | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2584/14/artifact/yetus-general-check/output/diff-checkstyle-hbase-client.txt | | checkstyle | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2584/14/artifact/yetus-general-check/output/diff-checkstyle-hbase-server.txt | | Max. process+thread count | 84 (vs. ulimit of 3) | | modules | C: hbase-client hbase-server U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2584/14/console | | versions | git=2.17.1 maven=3.6.3 spotbugs=3.1.12 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (HBASE-25246) Backup/Restore hbase cell tags.
[ https://issues.apache.org/jira/browse/HBASE-25246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17226365#comment-17226365 ] Andrew Kyle Purtell commented on HBASE-25246: - Cell tags are persisted in store files. A snapshot based backup/restore foundation will capture tags. To the extent that meets your needs, at least you have that. Agreed, to extract tags along with the rest of the cell data into some other format requires a tooling improvement. Makes sense. > Backup/Restore hbase cell tags. > --- > > Key: HBASE-25246 > URL: https://issues.apache.org/jira/browse/HBASE-25246 > Project: HBase > Issue Type: Improvement > Components: backup&restore >Reporter: Rushabh Shah >Assignee: Rushabh Shah >Priority: Major > > In PHOENIX-6213 we are planning to add cell tags for Delete mutations. After > having a discussion with hbase community via dev mailing thread, it was > decided that we will pass the tags via an attribute in Mutation object and > persist them to hbase via phoenix co-processor. The intention of PHOENIX-6213 > is to store metadata in Delete marker so that while running Restore tool we > can selectively restore certain Delete markers and ignore others. For that to > happen we need to persist these tags in Backup and retrieve them in Restore > MR jobs (Import/Export tool). > Currently we don't persist the tags in Backup. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Comment Edited] (HBASE-25246) Backup/Restore hbase cell tags.
[ https://issues.apache.org/jira/browse/HBASE-25246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17226365#comment-17226365 ] Andrew Kyle Purtell edited comment on HBASE-25246 at 11/4/20, 7:47 PM: --- Cell tags are persisted in store files. A snapshot based backup/restore foundation will capture tags because they preserve the store files containing them. To the extent that meets your needs, at least you have that. Agreed, to extract tags along with the rest of the cell data into some other format requires a tooling improvement. Makes sense. was (Author: apurtell): Cell tags are persisted in store files. A snapshot based backup/restore foundation will capture tags. To the extent that meets your needs, at least you have that. Agreed, to extract tags along with the rest of the cell data into some other format requires a tooling improvement. Makes sense. > Backup/Restore hbase cell tags. > --- > > Key: HBASE-25246 > URL: https://issues.apache.org/jira/browse/HBASE-25246 > Project: HBase > Issue Type: Improvement > Components: backup&restore >Reporter: Rushabh Shah >Assignee: Rushabh Shah >Priority: Major > > In PHOENIX-6213 we are planning to add cell tags for Delete mutations. After > having a discussion with hbase community via dev mailing thread, it was > decided that we will pass the tags via an attribute in Mutation object and > persist them to hbase via phoenix co-processor. The intention of PHOENIX-6213 > is to store metadata in Delete marker so that while running Restore tool we > can selectively restore certain Delete markers and ignore others. For that to > happen we need to persist these tags in Backup and retrieve them in Restore > MR jobs (Import/Export tool). > Currently we don't persist the tags in Backup. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] huaxiangsun commented on pull request #2584: HBASE-25126 Add load balance logic in hbase-client to distribute read…
huaxiangsun commented on pull request #2584: URL: https://github.com/apache/hbase/pull/2584#issuecomment-721930030 Update addressing Duo and Stack's comments. If there is anything I need to take care of, please let me know. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] huaxiangsun edited a comment on pull request #2584: HBASE-25126 Add load balance logic in hbase-client to distribute read…
huaxiangsun edited a comment on pull request #2584: URL: https://github.com/apache/hbase/pull/2584#issuecomment-721889440 > Landing this on master is proposed by me as this PR is not related to the server side changes. It can be used in our current code base, without the changes in HBASE-18070. And what's more, the client side code is different between master and branch-2, as on master, we rebuild the sync client on top of the async client, which makes it much easier to implement this issue, but on branch-2, you need to deal with the sync client separately. So I suggest we land this on master, and then start backporting to branch-2 ASAP. As I understand, HBASE-18070 is branched from master. As we are merging HBASE-18070 back to master, it would be better to merge them as a whole. Unit tests are different w/o the meta replication source changes in HBASE-18070, as there is no realtime replication of meta wal edits. I simulated that by "flush" and "refresh" of hfiles for meta. What do you think? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (HBASE-25216) The client zk syncer should deal with meta replica count change
[ https://issues.apache.org/jira/browse/HBASE-25216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17226346#comment-17226346 ] Hudson commented on HBASE-25216: Results for branch master [build #116 on builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/116/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/116/General_20Nightly_20Build_20Report/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/116/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/116/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > The client zk syncer should deal with meta replica count change > --- > > Key: HBASE-25216 > URL: https://issues.apache.org/jira/browse/HBASE-25216 > Project: HBase > Issue Type: Bug > Components: master, Zookeeper >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > Fix For: 3.0.0-alpha-1, 2.4.0 > > > The failure of TestSeparateClientZKCluster is because that, we start the zk > syncer before we initialize meta region, and after HBASE-25099, we will scan > zookeeper to get the meta znodes directly instead of checking the config, so > we will get an empty list since there is no meta location on zk yet, thus we > will sync nothing. 
> But changing the order cannot solve everything, as after HBASE-25099, we can > change the meta replica count without restarting the master, so the zk syncer > should have the ability to know of the change and start to sync the location for > the new replicas. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HBASE-25248) Followup jira to create single thread ScheduledExecutorService in AsyncConnImpl, and schedule all these periodic tasks
Huaxiang Sun created HBASE-25248: Summary: Followup jira to create single thread ScheduledExecutorService in AsyncConnImpl, and schedule all these periodic tasks Key: HBASE-25248 URL: https://issues.apache.org/jira/browse/HBASE-25248 Project: HBase Issue Type: Sub-task Reporter: Huaxiang Sun This is a followup Jira for comments in [https://github.com/apache/hbase/pull/2584/commits/d99c2b0ccfd2a57150e984742d097d1e1fcc47b0.] {quote} h4. *[saintstack|https://github.com/saintstack]* [18 hours ago|https://github.com/apache/hbase/pull/2584/commits/d99c2b0ccfd2a57150e984742d097d1e1fcc47b0#r517040579] Member So, implements Stoppable rather than do what the likes of AuthUtil does where it does createDummyStoppable and then has an internal do-nothing Stoppable? Makes sense. Perhaps add comment that it is a do-nothing stop required by ScheduledChore impls. s/isStopped/stopped/ h4. *[huaxiangsun|https://github.com/huaxiangsun]* [18 hours ago|https://github.com/apache/hbase/pull/2584/commits/d99c2b0ccfd2a57150e984742d097d1e1fcc47b0#r517042290] Author Member Will do. h4. *[ndimiduk|https://github.com/ndimiduk]* [17 hours ago|https://github.com/apache/hbase/pull/2584/commits/d99c2b0ccfd2a57150e984742d097d1e1fcc47b0#r517057141] Member Maybe in the future we can put a default empty implementation on the interface, and then implementers who don't need it can ignore it. h4.
*[Apache9|https://github.com/Apache9]* [17 hours ago|https://github.com/apache/hbase/pull/2584/commits/d99c2b0ccfd2a57150e984742d097d1e1fcc47b0#r517057999] Member Maybe we could just use a ScheduledExecutorService at client side, the ChoreService is designed to be used at server side I believe. Anyway, not a blocker for now. {quote} -- This message was sent by Atlassian Jira (v8.3.4#803005)
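The client-side alternative Apache9 suggests — a plain single-thread ScheduledExecutorService instead of ChoreService and its Stoppable plumbing — could look roughly like this. The class name and wiring are hypothetical; only the JDK APIs used are real.

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal sketch of HBASE-25248's idea: one single-threaded scheduler
// owned by the (hypothetical) async connection, onto which all periodic
// client-side tasks are scheduled.
class ClientChores implements AutoCloseable {
  private final ScheduledExecutorService scheduler =
      Executors.newSingleThreadScheduledExecutor(r -> {
        Thread t = new Thread(r, "async-conn-chore");
        t.setDaemon(true); // housekeeping must not keep the JVM alive
        return t;
      });

  ScheduledFuture<?> schedule(Runnable task, long periodMillis) {
    return scheduler.scheduleAtFixedRate(task, periodMillis, periodMillis,
        TimeUnit.MILLISECONDS);
  }

  @Override
  public void close() {
    // Shutting down the executor replaces the do-nothing Stoppable that
    // ScheduledChore implementations otherwise require.
    scheduler.shutdownNow();
  }
}
```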
[GitHub] [hbase] huaxiangsun commented on a change in pull request #2584: HBASE-25126 Add load balance logic in hbase-client to distribute read…
huaxiangsun commented on a change in pull request #2584: URL: https://github.com/apache/hbase/pull/2584#discussion_r517554820

## File path: hbase-client/src/main/java/org/apache/hadoop/hbase/client/CatalogReplicaLoadBalanceSimpleSelector.java

## @@ -0,0 +1,302 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client;
+
+import static org.apache.hadoop.hbase.client.ConnectionUtils.isEmptyStopRow;
+import static org.apache.hadoop.hbase.util.Bytes.BYTES_COMPARATOR;
+import static org.apache.hadoop.hbase.util.ConcurrentMapUtils.computeIfAbsent;
+import java.util.Iterator;
+import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ConcurrentMap;
+import java.util.concurrent.ConcurrentNavigableMap;
+import java.util.concurrent.ConcurrentSkipListMap;
+import java.util.concurrent.ThreadLocalRandom;
+import java.util.function.IntSupplier;
+import org.apache.commons.lang3.builder.ToStringBuilder;
+import org.apache.hadoop.hbase.HRegionLocation;
+import org.apache.hadoop.hbase.ScheduledChore;
+import org.apache.hadoop.hbase.Stoppable;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.EnvironmentEdgeManager;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * CatalogReplicaLoadBalanceReplicaSimpleSelector implements a simple catalog replica load balancing
+ * algorithm. It maintains a stale location cache for each table. Whenever the client looks up a
+ * location, it first checks whether the row is in the stale location cache. If yes, the location
+ * from the catalog replica is stale and it will go to the primary region to look up the up-to-date
+ * location; otherwise, it will randomly pick a replica region for the lookup. When clients receive
+ * RegionNotServedException from region servers, they add these region locations to the stale
+ * location cache. The stale cache is cleaned up periodically by a chore.
+ *
+ * It follows a simple algorithm to choose a replica:
+ *
+ * If there is no stale location entry for the rows it looks up, it randomly picks a replica
+ * region for the lookup.
+ * If the location from the replica region is stale, the client gets RegionNotServedException
+ * from the region server; in this case, it creates a StaleLocationCacheEntry in
+ * CatalogReplicaLoadBalanceReplicaSimpleSelector.
+ * When the client tries to do a location lookup, it checks the StaleLocationCache first for the
+ * rows it tries to look up; if an entry exists, it goes to the primary meta region for the
+ * lookup; otherwise, it follows step 1.
+ * A chore runs periodically to clean up cache entries in the StaleLocationCache.
+ */
+class CatalogReplicaLoadBalanceSimpleSelector implements
+  CatalogReplicaLoadBalanceSelector, Stoppable {
+  private static final Logger LOG =
+    LoggerFactory.getLogger(CatalogReplicaLoadBalanceSimpleSelector.class);
+  private final long STALE_CACHE_TIMEOUT_IN_MILLISECONDS = 3000; // 3 seconds
+  private final int STALE_CACHE_CLEAN_CHORE_INTERVAL_IN_MILLISECONDS = 1500; // 1.5 seconds
+  private final int REFRESH_REPLICA_COUNT_CHORE_INTERVAL_IN_MILLISECONDS = 60 * 1000; // 1 minute
+
+  /**
+   * StaleLocationCacheEntry is the entry created when a stale location is reported by a client.
+   */
+  private static final class StaleLocationCacheEntry {
+    // replica id where the stale location comes from.
+    private final int fromReplicaId;
+
+    // timestamp in milliseconds
+    private final long timestamp;
+
+    private final byte[] endKey;
+
+    StaleLocationCacheEntry(final int metaReplicaId, final byte[] endKey) {
+      this.fromReplicaId = metaReplicaId;
+      this.endKey = endKey;
+      timestamp = EnvironmentEdgeManager.currentTime();
+    }
+
+    public byte[] getEndKey() {
+      return this.endKey;
+    }
+
+    public int getFromReplicaId() {
+      return this.fromReplicaId;
+    }
+
+    public long getTimestamp() {
+      return this.timestamp;
+    }
+
+    @Override
+    public String toString() {
+      return new ToStringBuilder(this)
+        .append("endKey", endKey)
+        .append("fromReplicaId"
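The selection algorithm described in the class comment above can be modeled compactly. The following is a self-contained toy (String row keys, wall-clock time), not the actual selector, which also tracks end keys per table and uses chores for cleanup:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ThreadLocalRandom;

// Simplified model of the stale-location-cache selection algorithm.
class SimpleReplicaSelector {
  static final int PRIMARY_REPLICA_ID = 0;
  private final int replicaCount;
  private final long staleTimeoutMs;
  // row key -> expiry timestamp of the stale-location entry
  private final ConcurrentHashMap<String, Long> staleCache = new ConcurrentHashMap<>();

  SimpleReplicaSelector(int replicaCount, long staleTimeoutMs) {
    this.replicaCount = replicaCount;
    this.staleTimeoutMs = staleTimeoutMs;
  }

  // Called when a region server reports that a location read from a
  // replica was stale.
  void onStaleLocation(String row) {
    staleCache.put(row, System.currentTimeMillis() + staleTimeoutMs);
  }

  // Stale rows go to the primary for an up-to-date location; everything
  // else is spread randomly across the replicas.
  int select(String row) {
    Long expiry = staleCache.get(row);
    if (expiry != null) {
      if (expiry > System.currentTimeMillis()) {
        return PRIMARY_REPLICA_ID; // known-stale: use the primary
      }
      staleCache.remove(row); // expired entry; the chore would clean these
    }
    return ThreadLocalRandom.current().nextInt(replicaCount);
  }
}
```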
[jira] [Comment Edited] (HBASE-24749) Direct insert HFiles and Persist in-memory HFile tracking
[ https://issues.apache.org/jira/browse/HBASE-24749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17226329#comment-17226329 ] Tak-Lon (Stephen) Wu edited comment on HBASE-24749 at 11/4/20, 6:23 PM: thanks Nick for the information! will do a feature branch and come back to discuss when we should merge. was (Author: taklwu): thanks Nick ! > Direct insert HFiles and Persist in-memory HFile tracking > - > > Key: HBASE-24749 > URL: https://issues.apache.org/jira/browse/HBASE-24749 > Project: HBase > Issue Type: Umbrella > Components: Compaction, HFile >Affects Versions: 3.0.0-alpha-1 >Reporter: Tak-Lon (Stephen) Wu >Assignee: Tak-Lon (Stephen) Wu >Priority: Major > Labels: design, discussion, objectstore, storeFile, storeengine > Attachments: 1B100m-25m25m-performance.pdf, Apache HBase - Direct > insert HFiles and Persist in-memory HFile tracking.pdf > > > We propose a new feature (a new store engine) to remove the {{.tmp}} > directory used in the commit stage for common HFile operations such as flush > and compaction to improve the write throughput and latency on object stores. > Specifically for S3 filesystems, this will also mitigate read-after-write > inconsistencies caused by immediate HFiles validation after moving the > HFile(s) to data directory. > Please see attached for this proposal and the initial result captured with > 25m (25m operations) and 1B (100m operations) YCSB workload A LOAD and RUN, > and workload C RUN result. > The goal of this JIRA is to discuss with the community if the proposed > improvement on the object stores use case makes senses and if we miss > anything should be included. > Improvement Highlights > 1. Lower write latency, especially the p99+ > 2. Higher write throughput on flush and compaction > 3. Lower MTTR on region (re)open or assignment > 4. Remove consistent check dependencies (e.g. 
DynamoDB) supported by file > system implementation -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-24749) Direct insert HFiles and Persist in-memory HFile tracking
[ https://issues.apache.org/jira/browse/HBASE-24749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17226329#comment-17226329 ] Tak-Lon (Stephen) Wu commented on HBASE-24749: -- thanks Nick ! > Direct insert HFiles and Persist in-memory HFile tracking > - > > Key: HBASE-24749 > URL: https://issues.apache.org/jira/browse/HBASE-24749 > Project: HBase > Issue Type: Umbrella > Components: Compaction, HFile >Affects Versions: 3.0.0-alpha-1 >Reporter: Tak-Lon (Stephen) Wu >Assignee: Tak-Lon (Stephen) Wu >Priority: Major > Labels: design, discussion, objectstore, storeFile, storeengine > Attachments: 1B100m-25m25m-performance.pdf, Apache HBase - Direct > insert HFiles and Persist in-memory HFile tracking.pdf > > > We propose a new feature (a new store engine) to remove the {{.tmp}} > directory used in the commit stage for common HFile operations such as flush > and compaction to improve the write throughput and latency on object stores. > Specifically for S3 filesystems, this will also mitigate read-after-write > inconsistencies caused by immediate HFiles validation after moving the > HFile(s) to data directory. > Please see attached for this proposal and the initial result captured with > 25m (25m operations) and 1B (100m operations) YCSB workload A LOAD and RUN, > and workload C RUN result. > The goal of this JIRA is to discuss with the community if the proposed > improvement on the object stores use case makes senses and if we miss > anything should be included. > Improvement Highlights > 1. Lower write latency, especially the p99+ > 2. Higher write throughput on flush and compaction > 3. Lower MTTR on region (re)open or assignment > 4. Remove consistent check dependencies (e.g. DynamoDB) supported by file > system implementation -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HBASE-25247) Followup jira to encap all meta replica mode/selector processing into CatalogReplicaModeManager
Huaxiang Sun created HBASE-25247: Summary: Followup jira to encap all meta replica mode/selector processing into CatalogReplicaModeManager Key: HBASE-25247 URL: https://issues.apache.org/jira/browse/HBASE-25247 Project: HBase Issue Type: Sub-task Components: meta Reporter: Huaxiang Sun Assignee: Huaxiang Sun This is a follow-up to Stack's comments in [https://github.com/apache/hbase/pull/2584/commits/d99c2b0ccfd2a57150e984742d097d1e1fcc47b0.] {quote} h4. *[saintstack|https://github.com/saintstack]* [6 days ago|https://github.com/apache/hbase/pull/2584/commits/d99c2b0ccfd2a57150e984742d097d1e1fcc47b0#r514558880] Member Yeah, said this before, but in a follow-on it would be good to shove all this stuff into a CatalogReplicaMode class. Internally this class would figure out which policy to run. It would have a method that took a Scan and allowed decorating the Scan w/ whatever the mode needed to implement its policy. Later. h4. *[huaxiangsun|https://github.com/huaxiangsun]* [6 days ago|https://github.com/apache/hbase/pull/2584/commits/d99c2b0ccfd2a57150e984742d097d1e1fcc47b0#r514587250] Author Member Now that I've thought about it, it makes sense. Maybe a CatalogReplicaModeManager class which encaps both the mode and the selector? Let me create a followup jira after this is merged. {quote}
[GitHub] [hbase] huaxiangsun commented on a change in pull request #2584: HBASE-25126 Add load balance logic in hbase-client to distribute read…
huaxiangsun commented on a change in pull request #2584: URL: https://github.com/apache/hbase/pull/2584#discussion_r517542745
## File path: hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncNonMetaRegionLocator.java
## @@ -433,9 +474,24 @@ private void locateInMeta(TableName tableName, LocateRequest req) {
     Scan scan = new Scan().withStartRow(metaStartKey).withStopRow(metaStopKey, true)
       .addFamily(HConstants.CATALOG_FAMILY).setReversed(true).setCaching(locatePrefetchLimit)
       .setReadType(ReadType.PREAD);
-    if (useMetaReplicas) {
-      scan.setConsistency(Consistency.TIMELINE);
+
+    switch (this.metaReplicaMode) {
+      case LoadBalance:
+        int metaReplicaId = this.metaReplicaSelector.select(tableName, req.row, req.locateType);
+        if (metaReplicaId != RegionInfo.DEFAULT_REPLICA_ID) {
+          // If the selector gives a non-primary meta replica region, then go with it.
+          // Otherwise, just go to primary in non-hedgedRead mode.
+          scan.setConsistency(Consistency.TIMELINE);
+          scan.setReplicaId(metaReplicaId);
+        }
+        break;
+      case HedgedRead:
+        scan.setConsistency(Consistency.TIMELINE);
+        break;
+      default:
+        // do nothing
Review comment: HBASE-25247 is created as a followup. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (HBASE-24859) Optimize in-memory representation of mapreduce TableSplit objects
[ https://issues.apache.org/jira/browse/HBASE-24859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17226321#comment-17226321 ] Hudson commented on HBASE-24859: Results for branch branch-2 [build #93 on builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/93/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/93/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/93/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/93/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/93/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Optimize in-memory representation of mapreduce TableSplit objects > - > > Key: HBASE-24859 > URL: https://issues.apache.org/jira/browse/HBASE-24859 > Project: HBase > Issue Type: Improvement > Components: mapreduce >Affects Versions: 3.0.0-alpha-1, 2.3.3, 1.7.0, 2.4.0, 2.2.7 >Reporter: Sandeep Pal >Assignee: Sandeep Pal >Priority: Major > Fix For: 3.0.0-alpha-1, 1.7.0, 2.4.0, 2.2.7, 2.3.4 > > Attachments: Screen Shot 2020-08-26 at 8.44.34 AM.png, hbase-24859.png > > > It has been observed that when the table has too many regions, MR jobs > consume a lot of memory in the client. 
This is because we keep the region > level information in memory, and the memory-heavy object is TableSplit because > of the Scan object it carries. > However, it looks like the TableInputFormat for a single table doesn't need to > store the scan object in the TableSplit, because we do not use it and all the > splits are expected to have the exact same scan object. In TableInputFormat > we use the scan object directly from the MR conf.
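A minimal stand-alone sketch of the suggested shape (illustrative names, not the actual HBase classes): the serialized Scan lives once in the job configuration, and each split keeps only region boundaries and location.

```java
import java.util.Base64;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch of the optimization: one serialized Scan in the
// job conf, shared by all splits, instead of a copy inside every
// TableSplit. Names mirror but do not reproduce the HBase classes.
public class SplitSketch {

  // Per-split data shrinks to region boundaries and location only.
  record Split(byte[] startRow, byte[] stopRow, String regionLocation) {}

  // Hypothetical conf key standing in for the one TableInputFormat uses.
  static final String SCAN_KEY = "hbase.mapreduce.scan";

  public static void main(String[] args) {
    Map<String, String> conf = new HashMap<>();
    // The Scan is serialized once into the conf (here just base64 of a
    // placeholder payload standing in for the protobuf-encoded Scan).
    conf.put(SCAN_KEY, Base64.getEncoder().encodeToString("scan-spec".getBytes()));

    List<Split> splits = List.of(
        new Split(new byte[]{0}, new byte[]{5}, "rs1"),
        new Split(new byte[]{5}, new byte[]{10}, "rs2"));

    // Every mapper rebuilds the same Scan from the conf, so splits stay small.
    String scanSpec = new String(Base64.getDecoder().decode(conf.get(SCAN_KEY)));
    System.out.println(splits.size() + " splits share one scan: " + scanSpec);
  }
}
```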
[jira] [Commented] (HBASE-25212) Optionally abort requests in progress after deciding a region should close
[ https://issues.apache.org/jira/browse/HBASE-25212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17226322#comment-17226322 ] Hudson commented on HBASE-25212: Results for branch branch-2 [build #93 on builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/93/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/93/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/93/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/93/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/93/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Optionally abort requests in progress after deciding a region should close > -- > > Key: HBASE-25212 > URL: https://issues.apache.org/jira/browse/HBASE-25212 > Project: HBase > Issue Type: Improvement > Components: regionserver >Reporter: Andrew Kyle Purtell >Assignee: Andrew Kyle Purtell >Priority: Major > Fix For: 3.0.0-alpha-1, 1.7.0, 2.4.0 > > > After deciding a region should be closed, the regionserver will set the > internal region state to closing and wait for all pending requests to > complete, via a rendezvous on the region lock. 
In closing state the region > will not accept any new requests, but requests in progress will be allowed to > complete before the close action takes place. In our production we see > outlier wait times on this lock in excess of several minutes. > During close, when there are requests in flight, the regionserver is subject to > any conceivable reason for delay, like full scans over large regions, > expensive filtering hierarchies, bugs, or store-level performance problems > like slow HDFS. The regionserver should interrupt requests in progress to > facilitate smaller/shorter close times on an opt-in basis. > Optionally, via a configuration parameter -- which would be a system-wide > default set in hbase-site.xml in common practice but could be overridden in > table schema for per-table settings -- interrupt requests in progress holding > the region lock rather than wait for completion of all operations in flight. > Send back NotServingRegionException("region is closing") to the clients of > the interrupted operations, like we do after the write lock is acquired. The > client will transparently relocate the region data and resubmit the aborted > requests per normal retry policy. This can be less disruptive than waiting > for very long times for a region to close in extreme outlier cases (e.g. 50 > minutes). In such extreme cases it is better to abort the regionserver if the > close lock cannot be acquired in a reasonable amount of time, because the > region cannot be made available again until it has closed. > After waiting for all requests to complete, we flush the region's > memstore and finish the close. The flush portion of the close process is out > of scope of this proposal. Under normal conditions the flush portion of the > close completes quickly. It is specifically the waits on the close lock that have > been an occasional issue in our production and cause difficulty achieving > 99.99% availability.
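The opt-in behavior described above can be sketched with plain java.util.concurrent primitives: requests hold the region's read lock, and close tries the write lock for a bounded time and, on timeout, interrupts in-flight request threads instead of waiting indefinitely. All names and the lock arrangement here are illustrative, not the actual RegionServer implementation.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Illustrative sketch of the proposal, not the HBase code. Interrupted
// handlers would surface NotServingRegionException("region is closing")
// to clients, which retry transparently after relocating the region.
public class CloseWithInterrupt {
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
  private final List<Thread> inFlight = new ArrayList<>();

  public void handleRequest(Runnable work) {
    lock.readLock().lock();
    try {
      synchronized (inFlight) { inFlight.add(Thread.currentThread()); }
      work.run();
    } finally {
      synchronized (inFlight) { inFlight.remove(Thread.currentThread()); }
      lock.readLock().unlock();
    }
  }

  /** Returns true once the close (write) lock is held. */
  public boolean close(long waitMillis, boolean abortInProgress) throws InterruptedException {
    if (lock.writeLock().tryLock(waitMillis, TimeUnit.MILLISECONDS)) {
      return true;
    }
    if (!abortInProgress) {
      lock.writeLock().lock();  // old behavior: wait indefinitely
      return true;
    }
    // Opt-in behavior: interrupt requests still holding the read lock,
    // then try the write lock again for a bounded time.
    synchronized (inFlight) {
      for (Thread t : inFlight) t.interrupt();
    }
    return lock.writeLock().tryLock(waitMillis, TimeUnit.MILLISECONDS);
  }

  public static void main(String[] args) throws Exception {
    CloseWithInterrupt region = new CloseWithInterrupt();
    Thread slowScan = new Thread(() -> region.handleRequest(() -> {
      try {
        Thread.sleep(60_000);  // simulate a very long-running scan
      } catch (InterruptedException e) {
        // real code would translate this into NotServingRegionException
      }
    }));
    slowScan.start();
    Thread.sleep(200);  // let the request acquire the read lock
    boolean closed = region.close(500, true);
    slowScan.join();
    System.out.println("closed=" + closed);
  }
}
```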
[GitHub] [hbase] huaxiangsun commented on pull request #2584: HBASE-25126 Add load balance logic in hbase-client to distribute read…
huaxiangsun commented on pull request #2584: URL: https://github.com/apache/hbase/pull/2584#issuecomment-721889440 > Landing this on master is proposed by me, as this PR is not related to the server side changes. It can be used in our current code base without the changes in HBASE-18070. What's more, the client side code is different between master and branch-2: on master we rebuilt the sync client on top of the async client, which makes it much easier to implement this issue, but on branch-2 you need to deal with the sync client separately. So I suggest we land this on master, and then start backporting to branch-2 ASAP. As I understand it, HBASE-18070 is branched from master. As we are merging HBASE-18070 back to master, it would be better to merge them as a whole. Unit tests are going to be different w/o the meta replication source changes in HBASE-18070, as there is no realtime replication of meta wal edits. I simulated that by "flush" and "refresh" of hfiles for meta. What do you think?
[GitHub] [hbase] huaxiangsun commented on a change in pull request #2584: HBASE-25126 Add load balance logic in hbase-client to distribute read…
huaxiangsun commented on a change in pull request #2584: URL: https://github.com/apache/hbase/pull/2584#discussion_r517531473
## File path: hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncNonMetaRegionLocator.java
## @@ -196,8 +201,44 @@ private boolean tryComplete(LocateRequest req, CompletableFuture {
+    ConnectionConfiguration connConf = new ConnectionConfiguration(conn.getConfiguration());
Review comment: I thought I had checked. I just found AsyncConnectionConfiguration in AsyncConnectionImpl, so there is no need to recreate the connConf, thanks.
[jira] [Comment Edited] (HBASE-25246) Backup/Restore hbase cell tags.
[ https://issues.apache.org/jira/browse/HBASE-25246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17226311#comment-17226311 ] Rushabh Shah edited comment on HBASE-25246 at 11/4/20, 5:56 PM: Cc [~apurtell] [~gjacoby] was (Author: shahrs87): Cc [~andrew.purt...@gmail.com] [~gjacoby] > Backup/Restore hbase cell tags. > --- > > Key: HBASE-25246 > URL: https://issues.apache.org/jira/browse/HBASE-25246 > Project: HBase > Issue Type: Improvement > Components: backup&restore >Reporter: Rushabh Shah >Assignee: Rushabh Shah >Priority: Major > > In PHOENIX-6213 we are planning to add cell tags for Delete mutations. After > having a discussion with hbase community via dev mailing thread, it was > decided that we will pass the tags via an attribute in Mutation object and > persist them to hbase via phoenix co-processor. The intention of PHOENIX-6213 > is to store metadata in Delete marker so that while running Restore tool we > can selectively restore certain Delete markers and ignore others. For that to > happen we need to persist these tags in Backup and retrieve them in Restore > MR jobs (Import/Export tool). > Currently we don't persist the tags in Backup.
[jira] [Commented] (HBASE-25246) Backup/Restore hbase cell tags.
[ https://issues.apache.org/jira/browse/HBASE-25246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17226311#comment-17226311 ] Rushabh Shah commented on HBASE-25246: -- Cc [~andrew.purt...@gmail.com] [~gjacoby] > Backup/Restore hbase cell tags. > --- > > Key: HBASE-25246 > URL: https://issues.apache.org/jira/browse/HBASE-25246
[jira] [Created] (HBASE-25246) Backup/Restore hbase cell tags.
Rushabh Shah created HBASE-25246: Summary: Backup/Restore hbase cell tags. Key: HBASE-25246 URL: https://issues.apache.org/jira/browse/HBASE-25246 Project: HBase Issue Type: Improvement Components: backup&restore Reporter: Rushabh Shah Assignee: Rushabh Shah In PHOENIX-6213 we are planning to add cell tags for Delete mutations. After having a discussion with the hbase community via a dev mailing thread, it was decided that we will pass the tags via an attribute in the Mutation object and persist them to hbase via a phoenix co-processor. The intention of PHOENIX-6213 is to store metadata in the Delete marker so that while running the Restore tool we can selectively restore certain Delete markers and ignore others. For that to happen we need to persist these tags in Backup and retrieve them in the Restore MR jobs (Import/Export tool). Currently we don't persist the tags in Backup.
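To make the tag round-trip concrete, here is a small self-contained sketch of length-prefixed tag serialization into a single mutation attribute. The attribute key and wire format are hypothetical, not the scheme PHOENIX-6213 or this issue will settle on, and a plain map stands in for Mutation#setAttribute/#getAttribute.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical encoding sketch: pack cell tags into one byte[] attribute
// on a mutation (length-prefixed), so backup can persist them and the
// restore MR job can read them back and filter Delete markers.
public class TagAttributeSketch {
  static final String TAG_ATTR = "phoenix.delete.tags";  // hypothetical attribute key

  static byte[] encode(List<String> tags) throws IOException {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    for (String tag : tags) {
      byte[] b = tag.getBytes(StandardCharsets.UTF_8);
      out.write(ByteBuffer.allocate(4).putInt(b.length).array());  // 4-byte length prefix
      out.write(b);
    }
    return out.toByteArray();
  }

  static List<String> decode(byte[] raw) {
    List<String> tags = new ArrayList<>();
    ByteBuffer buf = ByteBuffer.wrap(raw);
    while (buf.remaining() > 0) {
      byte[] b = new byte[buf.getInt()];
      buf.get(b);
      tags.add(new String(b, StandardCharsets.UTF_8));
    }
    return tags;
  }

  public static void main(String[] args) throws IOException {
    // Stand-in for Mutation#setAttribute / Mutation#getAttribute.
    Map<String, byte[]> attributes = new HashMap<>();
    attributes.put(TAG_ATTR, encode(List.of("source=restore-keep", "tenant=t1")));
    System.out.println(decode(attributes.get(TAG_ATTR)));
  }
}
```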
[jira] [Commented] (HBASE-25238) Upgrading HBase from 2.2.0 to 2.3.x fails because of “Message missing required fields: state”
[ https://issues.apache.org/jira/browse/HBASE-25238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17226306#comment-17226306 ] Pankaj Kumar commented on HBASE-25238: -- {quote}Can change the proto fields to be optional so upgrades {quote} Make sense. > Upgrading HBase from 2.2.0 to 2.3.x fails because of “Message missing > required fields: state” > - > > Key: HBASE-25238 > URL: https://issues.apache.org/jira/browse/HBASE-25238 > Project: HBase > Issue Type: Bug >Affects Versions: 2.2.0 >Reporter: Zhuqi Jin >Priority: Critical > > When we upgraded HBASE cluster from 2.0.0-RC0 to 2.3.0 or 2.3.3, the HMaster > on upgraded node failed to start. > The error message is shown below: > {code:java} > 2020-11-02 23:04:01,998 ERROR [master/2c4006997f99:16000:becomeActiveMaster] > master.HMaster: Failed to become active > masterorg.apache.hbase.thirdparty.com.google.protobuf.InvalidProtocolBufferException: > Message missing required fields: state at > org.apache.hbase.thirdparty.com.google.protobuf.UninitializedMessageException.asInvalidProtocolBufferException(UninitializedMessageException.java:79) > at > org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.checkMessageInitialized(AbstractParser.java:68) > at > org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:120) > at > org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:125) > at > org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:48) > at > org.apache.hbase.thirdparty.com.google.protobuf.Any.unpack(Any.java:228) > at > org.apache.hadoop.hbase.procedure2.ProcedureUtil$StateSerializer.deserialize(ProcedureUtil.java:124) > at > org.apache.hadoop.hbase.master.assignment.RegionRemoteProcedureBase.deserializeStateData(RegionRemoteProcedureBase.java:352) > at > 
org.apache.hadoop.hbase.master.assignment.OpenRegionProcedure.deserializeStateData(OpenRegionProcedure.java:72) > at > org.apache.hadoop.hbase.procedure2.ProcedureUtil.convertToProcedure(ProcedureUtil.java:294) > at > org.apache.hadoop.hbase.procedure2.store.ProtoAndProcedure.getProcedure(ProtoAndProcedure.java:43) > at > org.apache.hadoop.hbase.procedure2.store.InMemoryProcedureIterator.next(InMemoryProcedureIterator.java:90) > at > org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore$1.load(RegionProcedureStore.java:194) > at > org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore$2.load(WALProcedureStore.java:474) > at > org.apache.hadoop.hbase.procedure2.store.wal.ProcedureWALFormatReader.finish(ProcedureWALFormatReader.java:151) > at > org.apache.hadoop.hbase.procedure2.store.wal.ProcedureWALFormat.load(ProcedureWALFormat.java:103) > at > org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.load(WALProcedureStore.java:465) > at > org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore.tryMigrate(RegionProcedureStore.java:184) > at > org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore.recoverLease(RegionProcedureStore.java:257) > at > org.apache.hadoop.hbase.procedure2.ProcedureExecutor.init(ProcedureExecutor.java:587) > at > org.apache.hadoop.hbase.master.HMaster.createProcedureExecutor(HMaster.java:1572) > at > org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:950) > at > org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2240) > at > org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:622) > at java.lang.Thread.run(Thread.java:748)2020-11-02 23:04:01,998 ERROR > [master/2c4006997f99:16000:becomeActiveMaster] master.HMaster: * ABORTING > master 2c4006997f99,16000,1604358237412: Unhandled exception. Starting > shutdown. 
> *org.apache.hbase.thirdparty.com.google.protobuf.InvalidProtocolBufferException: > Message missing required fields: state at > org.apache.hbase.thirdparty.com.google.protobuf.UninitializedMessageException.asInvalidProtocolBufferException(UninitializedMessageException.java:79) > at > org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.checkMessageInitialized(AbstractParser.java:68) > at > org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:120) > at > org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:125) >
[jira] [Commented] (HBASE-24186) RegionMover ignores replicationId
[ https://issues.apache.org/jira/browse/HBASE-24186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17226293#comment-17226293 ] Michael Stack commented on HBASE-24186: --- Not important, but just a note to say I reverted this patch from branch-2.0 too... It broke its build (I'm testing migration, so was trying to build branch-2.0 and found this). This matches the observation above by [~ram_krish] > RegionMover ignores replicationId > - > > Key: HBASE-24186 > URL: https://issues.apache.org/jira/browse/HBASE-24186 > Project: HBase > Issue Type: Bug > Components: read replicas >Reporter: Szabolcs Bukros >Assignee: Szabolcs Bukros >Priority: Minor > Fix For: 3.0.0-alpha-1, 2.3.0, 2.2.5 > > > When RegionMover looks up which rs hosts a region, it does this based on the > startRowKey. When read replication is enabled, this might not return the > expected region's data, and this can prevent the moving of these regions.
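The bug described above comes down to regions that share a start key but differ in replica id. A schematic version of the replica-aware lookup (stand-in types and names, not the RegionMover code):

```java
import java.util.List;
import java.util.Optional;

// Schematic only: with read replicas enabled, several regions share the
// same start key and differ only in replicaId, so a lookup keyed by the
// start key alone may return a replica instead of the intended region.
public class ReplicaAwareLookup {
  static final int DEFAULT_REPLICA_ID = 0;  // mirrors RegionInfo.DEFAULT_REPLICA_ID

  record Region(String startKey, int replicaId, String server) {}

  static Optional<Region> find(List<Region> regions, String startKey, int replicaId) {
    return regions.stream()
        .filter(r -> r.startKey().equals(startKey) && r.replicaId() == replicaId)
        .findFirst();
  }

  public static void main(String[] args) {
    List<Region> regions = List.of(
        new Region("aaa", 1, "rs2"),            // read replica listed first
        new Region("aaa", DEFAULT_REPLICA_ID, "rs1"));

    // Matching on start key alone could pick the replica on rs2;
    // matching on (startKey, replicaId) finds the primary on rs1.
    System.out.println(find(regions, "aaa", DEFAULT_REPLICA_ID).get().server());
  }
}
```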
[jira] [Commented] (HBASE-25238) Upgrading HBase from 2.2.0 to 2.3.x fails because of “Message missing required fields: state”
[ https://issues.apache.org/jira/browse/HBASE-25238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17226288#comment-17226288 ] Michael Stack commented on HBASE-25238: --- Marking this issue critical. Can change the proto fields to be optional so upgrades work. Let me make a patch. Thanks for linking HBASE-25234 [~pankajkumar] . Let me fix that too. > Upgrading HBase from 2.2.0 to 2.3.x fails because of “Message missing > required fields: state” > - > > Key: HBASE-25238 > URL: https://issues.apache.org/jira/browse/HBASE-25238 > Project: HBase > Issue Type: Bug >Affects Versions: 2.2.0 >Reporter: Zhuqi Jin >Priority: Critical
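The direction Stack describes can be shown schematically; the message and field names below are illustrative, not the actual HBase .proto definitions. In proto2, a field marked required cannot be absent at parse time, so procedure state serialized by a version that never wrote the field fails with exactly the "Message missing required fields" error in the stack trace above; relaxing the field to optional lets the newer parser accept the old data.

```proto
// Schematic example only -- not the real HBase message definitions.
message RegionStateData {
  // Before: parsing pre-upgrade procedure data that lacks this field
  // throws InvalidProtocolBufferException ("missing required fields: state").
  // required RegionTransitionState state = 1;

  // After: upgrade-safe; the reader tolerates an absent field.
  optional RegionTransitionState state = 1;
}
```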
[jira] [Commented] (HBASE-24859) Optimize in-memory representation of mapreduce TableSplit objects
[ https://issues.apache.org/jira/browse/HBASE-24859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17226274#comment-17226274 ] Hudson commented on HBASE-24859: Results for branch branch-2.3 [build #99 on builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.3/99/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.3/99/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.3/99/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.3/99/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.3/99/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Optimize in-memory representation of mapreduce TableSplit objects > - > > Key: HBASE-24859 > URL: https://issues.apache.org/jira/browse/HBASE-24859 > Project: HBase > Issue Type: Improvement > Components: mapreduce >Affects Versions: 3.0.0-alpha-1, 2.3.3, 1.7.0, 2.4.0, 2.2.7 >Reporter: Sandeep Pal >Assignee: Sandeep Pal >Priority: Major > Fix For: 3.0.0-alpha-1, 1.7.0, 2.4.0, 2.2.7, 2.3.4 > > Attachments: Screen Shot 2020-08-26 at 8.44.34 AM.png, hbase-24859.png > > > It has been observed that when the table has too many regions, MR jobs > consume a lot of memory in the client. 
This is because we keep the region > level information in memory, and the memory-heavy object is TableSplit because > of the Scan object it carries. > However, it looks like the TableInputFormat for a single table doesn't need to > store the scan object in the TableSplit, because we do not use it and all the > splits are expected to have the exact same scan object. In TableInputFormat > we use the scan object directly from the MR conf.
[GitHub] [hbase] ndimiduk commented on pull request #2624: HBASE-25245 : Fixing incorrect maven and jdk names for generate-hbase-website
ndimiduk commented on pull request #2624: URL: https://github.com/apache/hbase/pull/2624#issuecomment-721842037 Thank you @virajjasani
[jira] [Updated] (HBASE-25238) Upgrading HBase from 2.2.0 to 2.3.x fails because of “Message missing required fields: state”
[ https://issues.apache.org/jira/browse/HBASE-25238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael Stack updated HBASE-25238:
----------------------------------
    Priority: Critical  (was: Major)

> Upgrading HBase from 2.2.0 to 2.3.x fails because of "Message missing required fields: state"
> ---------------------------------------------------------------------------------------------
>
>                 Key: HBASE-25238
>                 URL: https://issues.apache.org/jira/browse/HBASE-25238
>             Project: HBase
>          Issue Type: Bug
>    Affects Versions: 2.2.0
>            Reporter: Zhuqi Jin
>            Priority: Critical
>
> When we upgraded an HBase cluster from 2.0.0-RC0 to 2.3.0 or 2.3.3, the HMaster on the upgraded node failed to start.
> The error message is shown below:
> {code:java}
> 2020-11-02 23:04:01,998 ERROR [master/2c4006997f99:16000:becomeActiveMaster] master.HMaster: Failed to become active master
> org.apache.hbase.thirdparty.com.google.protobuf.InvalidProtocolBufferException: Message missing required fields: state
>     at org.apache.hbase.thirdparty.com.google.protobuf.UninitializedMessageException.asInvalidProtocolBufferException(UninitializedMessageException.java:79)
>     at org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.checkMessageInitialized(AbstractParser.java:68)
>     at org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:120)
>     at org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:125)
>     at org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:48)
>     at org.apache.hbase.thirdparty.com.google.protobuf.Any.unpack(Any.java:228)
>     at org.apache.hadoop.hbase.procedure2.ProcedureUtil$StateSerializer.deserialize(ProcedureUtil.java:124)
>     at org.apache.hadoop.hbase.master.assignment.RegionRemoteProcedureBase.deserializeStateData(RegionRemoteProcedureBase.java:352)
>     at org.apache.hadoop.hbase.master.assignment.OpenRegionProcedure.deserializeStateData(OpenRegionProcedure.java:72)
>     at org.apache.hadoop.hbase.procedure2.ProcedureUtil.convertToProcedure(ProcedureUtil.java:294)
>     at org.apache.hadoop.hbase.procedure2.store.ProtoAndProcedure.getProcedure(ProtoAndProcedure.java:43)
>     at org.apache.hadoop.hbase.procedure2.store.InMemoryProcedureIterator.next(InMemoryProcedureIterator.java:90)
>     at org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore$1.load(RegionProcedureStore.java:194)
>     at org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore$2.load(WALProcedureStore.java:474)
>     at org.apache.hadoop.hbase.procedure2.store.wal.ProcedureWALFormatReader.finish(ProcedureWALFormatReader.java:151)
>     at org.apache.hadoop.hbase.procedure2.store.wal.ProcedureWALFormat.load(ProcedureWALFormat.java:103)
>     at org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.load(WALProcedureStore.java:465)
>     at org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore.tryMigrate(RegionProcedureStore.java:184)
>     at org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore.recoverLease(RegionProcedureStore.java:257)
>     at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.init(ProcedureExecutor.java:587)
>     at org.apache.hadoop.hbase.master.HMaster.createProcedureExecutor(HMaster.java:1572)
>     at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:950)
>     at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2240)
>     at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:622)
>     at java.lang.Thread.run(Thread.java:748)
> 2020-11-02 23:04:01,998 ERROR [master/2c4006997f99:16000:becomeActiveMaster] master.HMaster: * ABORTING master 2c4006997f99,16000,1604358237412: Unhandled exception. Starting shutdown. *
> org.apache.hbase.thirdparty.com.google.protobuf.InvalidProtocolBufferException: Message missing required fields: state
>     at org.apache.hbase.thirdparty.com.google.protobuf.UninitializedMessageException.asInvalidProtocolBufferException(UninitializedMessageException.java:79)
>     at org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.checkMessageInitialized(AbstractParser.java:68)
>     at org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:120)
>     at org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:125)
>     at org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java
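The failure above is proto2's required-field initialization check: a procedure serialized by the older version lacks the newer `state` field, so `Any.unpack` refuses to produce the message. A minimal, purely illustrative sketch of that behavior (the map-of-fields representation and class name are assumptions for the example, not the actual HBase wire format or API):

```java
import java.util.Map;

// Hypothetical stand-in for a proto2-style "required field" check during
// procedure-state deserialization. Illustrative only.
public class ProcedureStateParser {
    public static String parseState(Map<String, String> fields) {
        String state = fields.get("state");
        if (state == null) {
            // A message written by the older version predates the field:
            // the parser fails loudly, which is what aborts the HMaster.
            throw new IllegalStateException("Message missing required fields: state");
        }
        return state;
    }
}
```

This is why such upgrades need either an optional field plus a default on the reading side, or a migration step that rewrites old procedure state.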
[jira] [Resolved] (HBASE-25053) WAL replay should ignore 0-length files
[ https://issues.apache.org/jira/browse/HBASE-25053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael Stack resolved HBASE-25053.
-----------------------------------
    Hadoop Flags: Reviewed
      Resolution: Fixed

Merged to branch-2 and master. Thanks for patch [~niuyulin]. Thanks for reviews [~zhangduo] and [~vjasani]

> WAL replay should ignore 0-length files
> ----------------------------------------
>
>                 Key: HBASE-25053
>                 URL: https://issues.apache.org/jira/browse/HBASE-25053
>             Project: HBase
>          Issue Type: Bug
>          Components: master, regionserver
>    Affects Versions: 2.3.1
>            Reporter: Nick Dimiduk
>            Assignee: niuyulin
>            Priority: Major
>             Fix For: 3.0.0-alpha-1, 2.4.0
>
> I overdrove a small testing cluster, filling HDFS. After cleaning up data to bring HBase back up, I noticed all masters -refused to start- abort. Logs complain of seeking past EOF. Indeed the last wal file name logged is a 0-length file. WAL replay should gracefully skip and clean up such an empty file.
> {noformat}
> 2020-09-16 19:51:30,297 ERROR org.apache.hadoop.hbase.master.HMaster: Failed to become active master
> java.io.EOFException: Cannot seek after EOF
>     at org.apache.hadoop.hdfs.DFSInputStream.seek(DFSInputStream.java:1448)
>     at org.apache.hadoop.fs.FSDataInputStream.seek(FSDataInputStream.java:66)
>     at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.initInternal(ProtobufLogReader.java:211)
>     at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.initReader(ProtobufLogReader.java:173)
>     at org.apache.hadoop.hbase.regionserver.wal.ReaderBase.init(ReaderBase.java:64)
>     at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.init(ProtobufLogReader.java:168)
>     at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:323)
>     at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:305)
>     at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:293)
>     at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:429)
>     at org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEdits(HRegion.java:4859)
>     at org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEditsIfAny(HRegion.java:4765)
>     at org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:1014)
>     at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:956)
>     at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7496)
>     at org.apache.hadoop.hbase.regionserver.HRegion.openHRegionFromTableDir(HRegion.java:7454)
>     at org.apache.hadoop.hbase.master.region.MasterRegion.open(MasterRegion.java:269)
>     at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:309)
>     at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104)
>     at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:949)
>     at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2240)
>     at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:622)
>     at java.base/java.lang.Thread.run(Thread.java:834)
> {noformat}

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
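The requested behavior (skip empty WAL files instead of handing them to the reader, which then seeks past EOF) can be sketched as follows. This is a hedged illustration, not HBase's actual implementation: the method name `replayableWals` and the name-to-length map are stand-ins for a FileStatus listing.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Illustrative sketch: filter out 0-length WAL files before replay.
// In the real fix these would also be cleaned up (archived/deleted).
public class WalFilter {
    public static List<String> replayableWals(Map<String, Long> walLengths) {
        List<String> result = new ArrayList<>();
        for (Map.Entry<String, Long> e : walLengths.entrySet()) {
            // A 0-length file has no header to read; skip it gracefully
            // rather than letting the reader throw EOFException.
            if (e.getValue() > 0) {
                result.add(e.getKey());
            }
        }
        return result;
    }
}
```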
[jira] [Updated] (HBASE-25053) WAL replay should ignore 0-length files
[ https://issues.apache.org/jira/browse/HBASE-25053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael Stack updated HBASE-25053:
----------------------------------
    Fix Version/s: 2.4.0
                   3.0.0-alpha-1

> WAL replay should ignore 0-length files
> ----------------------------------------
>
>                 Key: HBASE-25053
>                 URL: https://issues.apache.org/jira/browse/HBASE-25053
>             Project: HBase
>          Issue Type: Bug
>          Components: master, regionserver
>    Affects Versions: 2.3.1
>            Reporter: Nick Dimiduk
>            Assignee: niuyulin
>            Priority: Major
>             Fix For: 3.0.0-alpha-1, 2.4.0
>
> I overdrove a small testing cluster, filling HDFS. After cleaning up data to bring HBase back up, I noticed all masters -refused to start- abort. Logs complain of seeking past EOF. Indeed the last wal file name logged is a 0-length file. WAL replay should gracefully skip and clean up such an empty file.
> {noformat}
> 2020-09-16 19:51:30,297 ERROR org.apache.hadoop.hbase.master.HMaster: Failed to become active master
> java.io.EOFException: Cannot seek after EOF
>     at org.apache.hadoop.hdfs.DFSInputStream.seek(DFSInputStream.java:1448)
>     at org.apache.hadoop.fs.FSDataInputStream.seek(FSDataInputStream.java:66)
>     at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.initInternal(ProtobufLogReader.java:211)
>     at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.initReader(ProtobufLogReader.java:173)
>     at org.apache.hadoop.hbase.regionserver.wal.ReaderBase.init(ReaderBase.java:64)
>     at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.init(ProtobufLogReader.java:168)
>     at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:323)
>     at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:305)
>     at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:293)
>     at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:429)
>     at org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEdits(HRegion.java:4859)
>     at org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEditsIfAny(HRegion.java:4765)
>     at org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:1014)
>     at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:956)
>     at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7496)
>     at org.apache.hadoop.hbase.regionserver.HRegion.openHRegionFromTableDir(HRegion.java:7454)
>     at org.apache.hadoop.hbase.master.region.MasterRegion.open(MasterRegion.java:269)
>     at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:309)
>     at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104)
>     at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:949)
>     at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2240)
>     at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:622)
>     at java.base/java.lang.Thread.run(Thread.java:834)
> {noformat}

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[GitHub] [hbase] saintstack commented on pull request #2437: HBASE-25053 WAL replay should ignore 0-length files
saintstack commented on pull request #2437: URL: https://github.com/apache/hbase/pull/2437#issuecomment-721830912 @nyl3532016 Sorry for delay. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] saintstack merged pull request #2437: HBASE-25053 WAL replay should ignore 0-length files
saintstack merged pull request #2437: URL: https://github.com/apache/hbase/pull/2437 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (HBASE-25210) RegionInfo.isOffline is now a duplication with RegionInfo.isSplit
[ https://issues.apache.org/jira/browse/HBASE-25210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael Stack updated HBASE-25210:
----------------------------------
    Fix Version/s: 2.4.0

> RegionInfo.isOffline is now a duplication with RegionInfo.isSplit
> ------------------------------------------------------------------
>
>                 Key: HBASE-25210
>                 URL: https://issues.apache.org/jira/browse/HBASE-25210
>             Project: HBase
>          Issue Type: Improvement
>          Components: meta
>            Reporter: Duo Zhang
>            Assignee: niuyulin
>            Priority: Major
>             Fix For: 3.0.0-alpha-1, 2.4.0
>
> The only place where we set it to true is in splitRegion, and at the same time we set split to true.
> So I suggest we deprecate isOffline and isSplitParent in RegionInfo, leaving only the isSplit method, and in RegionInfoBuilder deprecate setOffline, leaving only the setSplit method.
> This would make our code base cleaner.
> For serialization compatibility, we had better still keep the split and offline fields in the actual RegionInfo data structure for a while.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Reopened] (HBASE-25210) RegionInfo.isOffline is now a duplication with RegionInfo.isSplit
[ https://issues.apache.org/jira/browse/HBASE-25210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael Stack reopened HBASE-25210:
-----------------------------------

Reopening to backport (I think it a good idea... nice deprecations only...)

> RegionInfo.isOffline is now a duplication with RegionInfo.isSplit
> ------------------------------------------------------------------
>
>                 Key: HBASE-25210
>                 URL: https://issues.apache.org/jira/browse/HBASE-25210
>             Project: HBase
>          Issue Type: Improvement
>          Components: meta
>            Reporter: Duo Zhang
>            Assignee: niuyulin
>            Priority: Major
>             Fix For: 3.0.0-alpha-1
>
> The only place where we set it to true is in splitRegion, and at the same time we set split to true.
> So I suggest we deprecate isOffline and isSplitParent in RegionInfo, leaving only the isSplit method, and in RegionInfoBuilder deprecate setOffline, leaving only the setSplit method.
> This would make our code base cleaner.
> For serialization compatibility, we had better still keep the split and offline fields in the actual RegionInfo data structure for a while.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Resolved] (HBASE-25210) RegionInfo.isOffline is now a duplication with RegionInfo.isSplit
[ https://issues.apache.org/jira/browse/HBASE-25210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael Stack resolved HBASE-25210.
-----------------------------------
    Resolution: Fixed

Re-closing after backport to branch-2.

> RegionInfo.isOffline is now a duplication with RegionInfo.isSplit
> ------------------------------------------------------------------
>
>                 Key: HBASE-25210
>                 URL: https://issues.apache.org/jira/browse/HBASE-25210
>             Project: HBase
>          Issue Type: Improvement
>          Components: meta
>            Reporter: Duo Zhang
>            Assignee: niuyulin
>            Priority: Major
>             Fix For: 3.0.0-alpha-1, 2.4.0
>
> The only place where we set it to true is in splitRegion, and at the same time we set split to true.
> So I suggest we deprecate isOffline and isSplitParent in RegionInfo, leaving only the isSplit method, and in RegionInfoBuilder deprecate setOffline, leaving only the setSplit method.
> This would make our code base cleaner.
> For serialization compatibility, we had better still keep the split and offline fields in the actual RegionInfo data structure for a while.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
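The proposal above (deprecate isOffline in favor of isSplit while keeping both fields serialized for compatibility) can be sketched roughly like this. The class below is a simplified stand-in for RegionInfo, not the real HBase class:

```java
// Simplified illustration of the HBASE-25210 direction: isOffline is
// deprecated and delegates to the same state as isSplit, but the field
// itself is retained for wire/serialization compatibility.
public class RegionInfoSketch {
    private final boolean split;
    private final boolean offline; // kept only for serialization compatibility

    public RegionInfoSketch(boolean split) {
        this.split = split;
        // Per the issue, the only place offline is set to true is splitRegion,
        // where split is set to true at the same time.
        this.offline = split;
    }

    public boolean isSplit() {
        return split;
    }

    /** @deprecated duplicates {@link #isSplit()}; kept for compatibility. */
    @Deprecated
    public boolean isOffline() {
        return offline;
    }
}
```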
[GitHub] [hbase] JeongDaeKim commented on pull request #2602: HBASE-25229 Instantiate BucketCache before RSs create their ephemeral nodes
JeongDaeKim commented on pull request #2602: URL: https://github.com/apache/hbase/pull/2602#issuecomment-721811827 Addressed checkstyle violation. It seems all tests passed. Failed tests at the second build are not relevant to my changes. - hadoop.hbase.security.visibility.TestVisibilityLabelsWithACL - hadoop.hbase.regionserver.TestSplitTransactionOnCluster This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (HBASE-25212) Optionally abort requests in progress after deciding a region should close
[ https://issues.apache.org/jira/browse/HBASE-25212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17226237#comment-17226237 ] Hudson commented on HBASE-25212: Results for branch branch-1 [build #52 on builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-1/52/]: (x) *{color:red}-1 overall{color}* details (if available): (x) {color:red}-1 general checks{color} -- For more information [see general report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-1/52//General_Nightly_Build_Report/] (x) {color:red}-1 jdk7 checks{color} -- For more information [see jdk7 report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-1/52//JDK7_Nightly_Build_Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-1/52//JDK8_Nightly_Build_Report_(Hadoop2)/] (x) {color:red}-1 source release artifact{color} -- See build output for details. > Optionally abort requests in progress after deciding a region should close > -- > > Key: HBASE-25212 > URL: https://issues.apache.org/jira/browse/HBASE-25212 > Project: HBase > Issue Type: Improvement > Components: regionserver >Reporter: Andrew Kyle Purtell >Assignee: Andrew Kyle Purtell >Priority: Major > Fix For: 3.0.0-alpha-1, 1.7.0, 2.4.0 > > > After deciding a region should be closed, the regionserver will set the > internal region state to closing and wait for all pending requests to > complete, via a rendezvous on the region lock. In closing state the region > will not accept any new requests but requests in progress will be allowed to > complete before the close action takes place. In our production we see > outlier wait times on this lock in excess of several minutes. 
> During close, when there are requests in flight, the regionserver is subject to any conceivable reason for delay, like full scans over large regions, expensive filtering hierarchies, bugs, or store-level performance problems like slow HDFS. The regionserver should interrupt requests in progress to facilitate smaller/shorter close times, on an opt-in basis.
> Optionally, via a configuration parameter -- which in common practice would be a system-wide default set in hbase-site.xml but could be overridden in table schema for per-table settings -- interrupt requests in progress holding the region lock rather than wait for completion of all operations in flight. Send back NotServingRegionException("region is closing") to the clients of the interrupted operations, like we do after the write lock is acquired. The client will transparently relocate the region data and resubmit the aborted requests per the normal retry policy. This can be less disruptive than waiting a very long time for a region to close in extreme outlier cases (e.g. 50 minutes). In such extreme cases it is better to abort the regionserver if the close lock cannot be acquired in a reasonable amount of time, because the region cannot be made available again until it has closed.
> After waiting for all requests to complete, we flush the region's memstore and finish the close. The flush portion of the close process is out of scope for this proposal; under normal conditions it completes quickly. It is specifically the wait on the close lock that has been an occasional issue in our production, making it difficult to achieve 99.99% availability.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
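The close-lock rendezvous described above can be sketched with a read/write lock: in-flight requests hold the read lock and close needs the write lock. The proposal amounts to bounding the write-lock wait instead of blocking indefinitely. This is a hedged, self-contained illustration; the names and the timeout are assumptions, not HBase's actual region-close code:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Illustrative sketch only: bound the wait for the close (write) lock so the
// caller can decide to interrupt in-flight requests instead of waiting forever.
public class RegionCloseSketch {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    // Request handlers acquire this for the duration of each operation.
    public ReentrantReadWriteLock.ReadLock requestLock() {
        return lock.readLock();
    }

    // Returns true if the close lock was acquired within the timeout.
    // A false return is the signal to interrupt in-flight requests (sending
    // NotServingRegionException to their clients) or, in the extreme, abort.
    public boolean tryClose(long timeoutMillis) throws InterruptedException {
        return lock.writeLock().tryLock(timeoutMillis, TimeUnit.MILLISECONDS);
    }
}
```

With no readers active, `tryClose` succeeds immediately; with a long-running scan holding the read lock, it returns false after the timeout rather than blocking for minutes.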
[jira] [Resolved] (HBASE-25218) Release 2.3.3
[ https://issues.apache.org/jira/browse/HBASE-25218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Viraj Jasani resolved HBASE-25218.
----------------------------------
    Fix Version/s: 3.0.0-alpha-1
     Hadoop Flags: Reviewed
       Resolution: Fixed

> Release 2.3.3
> -------------
>
>                 Key: HBASE-25218
>                 URL: https://issues.apache.org/jira/browse/HBASE-25218
>             Project: HBase
>          Issue Type: Task
>          Components: community
>            Reporter: Viraj Jasani
>            Assignee: Viraj Jasani
>            Priority: Major
>             Fix For: 3.0.0-alpha-1
>
> Sub-tasks involved:
> # Spin RCs
> # "Release" staged nexus repository
> # Release version 2.3.3 in Jira
> # Promote 2.3.3 RC artifacts in svn
> # Update reporter tool with new release
> # Push signed release tag
> # Add 2.3.3 to the downloads page
> # Send announce email

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[GitHub] [hbase] Apache-HBase commented on pull request #2535: HBASE-25116 RegionMonitor support RegionTask count normalize
Apache-HBase commented on pull request #2535: URL: https://github.com/apache/hbase/pull/2535#issuecomment-721788354 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 9s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 26s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 5m 14s | master passed | | +1 :green_heart: | compile | 1m 59s | master passed | | +1 :green_heart: | shadedjars | 8m 18s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 1m 13s | master passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 18s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 5m 12s | the patch passed | | +1 :green_heart: | compile | 1m 52s | the patch passed | | +1 :green_heart: | javac | 1m 52s | the patch passed | | +1 :green_heart: | shadedjars | 7m 36s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 1m 4s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 57s | hbase-common in the patch passed. | | +1 :green_heart: | unit | 198m 15s | hbase-server in the patch passed. 
| | | | 236m 42s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2535/4/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2535 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 0a9a1aa65fb5 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 4bd9ee43a4 | | Default Java | AdoptOpenJDK-11.0.6+10 | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2535/4/testReport/ | | Max. process+thread count | 3020 (vs. ulimit of 3) | | modules | C: hbase-common hbase-server U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2535/4/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #2623: HBASE-25240 gson format of RpcServer.logResponse is abnormal
Apache-HBase commented on pull request #2623: URL: https://github.com/apache/hbase/pull/2623#issuecomment-721778328 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 33s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 16s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 4m 9s | master passed | | +1 :green_heart: | compile | 1m 35s | master passed | | +1 :green_heart: | shadedjars | 6m 47s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 1m 3s | master passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 17s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 4m 6s | the patch passed | | +1 :green_heart: | compile | 1m 37s | the patch passed | | +1 :green_heart: | javac | 1m 37s | the patch passed | | +1 :green_heart: | shadedjars | 6m 48s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 1m 4s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 46s | hbase-common in the patch passed. | | +1 :green_heart: | unit | 139m 58s | hbase-server in the patch passed. 
| | | | 172m 27s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2623/4/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2623 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux f9ec83bd82c2 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 4bd9ee43a4 | | Default Java | AdoptOpenJDK-11.0.6+10 | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2623/4/testReport/ | | Max. process+thread count | 4167 (vs. ulimit of 3) | | modules | C: hbase-common hbase-server U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2623/4/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #2623: HBASE-25240 gson format of RpcServer.logResponse is abnormal
Apache-HBase commented on pull request #2623: URL: https://github.com/apache/hbase/pull/2623#issuecomment-721777648 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 28s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 18s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 3m 32s | master passed | | +1 :green_heart: | compile | 1m 19s | master passed | | +1 :green_heart: | shadedjars | 6m 29s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 1m 1s | master passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 17s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 27s | the patch passed | | +1 :green_heart: | compile | 1m 21s | the patch passed | | +1 :green_heart: | javac | 1m 21s | the patch passed | | +1 :green_heart: | shadedjars | 6m 34s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 58s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 23s | hbase-common in the patch passed. | | +1 :green_heart: | unit | 141m 37s | hbase-server in the patch passed. 
| | | | 171m 16s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2623/4/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2623 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux b800b0781478 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 4bd9ee43a4 | | Default Java | AdoptOpenJDK-1.8.0_232-b09 | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2623/4/testReport/ | | Max. process+thread count | 4922 (vs. ulimit of 3) | | modules | C: hbase-common hbase-server U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2623/4/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Resolved] (HBASE-25245) hbase_generate_website is failing due to incorrect jdk and maven syntax
[ https://issues.apache.org/jira/browse/HBASE-25245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Viraj Jasani resolved HBASE-25245.
----------------------------------
    Fix Version/s: 3.0.0-alpha-1
     Hadoop Flags: Reviewed
       Resolution: Fixed

> hbase_generate_website is failing due to incorrect jdk and maven syntax
> ------------------------------------------------------------------------
>
>                 Key: HBASE-25245
>                 URL: https://issues.apache.org/jira/browse/HBASE-25245
>             Project: HBase
>          Issue Type: Task
>            Reporter: Viraj Jasani
>            Assignee: Viraj Jasani
>            Priority: Major
>             Fix For: 3.0.0-alpha-1
>
> While waiting for the HBase download page to reflect the new release, and during an offline syncup with [~ndimiduk], I realized that the generate-website job has been failing for quite some time now, e.g.
> https://ci-hadoop.apache.org/job/HBase/job/hbase_generate_website/80/
> {code:java}
> Obtained dev-support/jenkins-scripts/generate-hbase-website.Jenkinsfile from git https://gitbox.apache.org/repos/asf/hbase.git
> Running in Durability level: PERFORMANCE_OPTIMIZED
> org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
> WorkflowScript: 40: Tool type "maven" does not have an install of "Maven (latest)" configured - did you mean "maven_latest"? @ line 40, column 15.
>            maven 'Maven (latest)'
>                  ^
> WorkflowScript: 42: Tool type "jdk" does not have an install of "JDK 1.8 (latest)" configured - did you mean "jdk_1.8_latest"? @ line 42, column 13.
>            jdk "JDK 1.8 (latest)"
>            ^
> {code}
> We might have to apply a fix similar to HBASE-25204 to the generate-website-specific Jenkinsfile.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[GitHub] [hbase] Apache-HBase commented on pull request #2602: HBASE-25229 Instantiate BucketCache before RSs create their ephemeral nodes
Apache-HBase commented on pull request #2602: URL: https://github.com/apache/hbase/pull/2602#issuecomment-721774898 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 37s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. | ||| _ branch-1 Compile Tests _ | | +1 :green_heart: | mvninstall | 9m 50s | branch-1 passed | | +1 :green_heart: | compile | 0m 42s | branch-1 passed with JDK Azul Systems, Inc.-1.8.0_262-b19 | | +1 :green_heart: | compile | 0m 44s | branch-1 passed with JDK Azul Systems, Inc.-1.7.0_272-b10 | | +1 :green_heart: | checkstyle | 1m 43s | branch-1 passed | | +1 :green_heart: | shadedjars | 3m 5s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 50s | branch-1 passed with JDK Azul Systems, Inc.-1.8.0_262-b19 | | +1 :green_heart: | javadoc | 0m 42s | branch-1 passed with JDK Azul Systems, Inc.-1.7.0_272-b10 | | +0 :ok: | spotbugs | 3m 5s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 3m 1s | branch-1 passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 55s | the patch passed | | +1 :green_heart: | compile | 0m 41s | the patch passed with JDK Azul Systems, Inc.-1.8.0_262-b19 | | +1 :green_heart: | javac | 0m 41s | the patch passed | | +1 :green_heart: | compile | 0m 44s | the patch passed with JDK Azul Systems, Inc.-1.7.0_272-b10 | | +1 :green_heart: | javac | 0m 44s | the patch passed | | +1 :green_heart: | checkstyle | 1m 34s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. 
| | +1 :green_heart: | shadedjars | 2m 48s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | hadoopcheck | 4m 34s | Patch does not cause any errors with Hadoop 2.8.5 2.9.2. | | +1 :green_heart: | javadoc | 0m 32s | the patch passed with JDK Azul Systems, Inc.-1.8.0_262-b19 | | +1 :green_heart: | javadoc | 0m 42s | the patch passed with JDK Azul Systems, Inc.-1.7.0_272-b10 | | +1 :green_heart: | findbugs | 3m 29s | the patch passed | ||| _ Other Tests _ | | -1 :x: | unit | 104m 30s | hbase-server in the patch failed. | | +1 :green_heart: | asflicense | 0m 34s | The patch does not generate ASF License warnings. | | | | 147m 3s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2602/4/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2602 | | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux a501b20e11be 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-home/workspace/Base-PreCommit-GitHub-PR_PR-2602/out/precommit/personality/provided.sh | | git revision | branch-1 / 6626cc1 | | Default Java | Azul Systems, Inc.-1.7.0_272-b10 | | Multi-JDK versions | /usr/lib/jvm/zulu-8-amd64:Azul Systems, Inc.-1.8.0_262-b19 /usr/lib/jvm/zulu-7-amd64:Azul Systems, Inc.-1.7.0_272-b10 | | unit | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2602/4/artifact/out/patch-unit-hbase-server.txt | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2602/4/testReport/ | | Max. process+thread count | 3300 (vs. 
ulimit of 1) | | modules | C: hbase-server U: hbase-server | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2602/4/console | | versions | git=1.9.1 maven=3.0.5 findbugs=3.0.1 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #2535: HBASE-25116 RegionMonitor support RegionTask count normalize
Apache-HBase commented on pull request #2535: URL: https://github.com/apache/hbase/pull/2535#issuecomment-721754768 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 39s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 34s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 4m 53s | master passed | | +1 :green_heart: | compile | 1m 42s | master passed | | +1 :green_heart: | shadedjars | 8m 46s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 1m 8s | master passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 17s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 4m 18s | the patch passed | | +1 :green_heart: | compile | 1m 39s | the patch passed | | +1 :green_heart: | javac | 1m 39s | the patch passed | | +1 :green_heart: | shadedjars | 8m 25s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 1m 19s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 45s | hbase-common in the patch passed. | | +1 :green_heart: | unit | 141m 42s | hbase-server in the patch passed. 
| | | | 179m 30s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2535/4/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2535 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 7762dd1f05a1 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 4bd9ee43a4 | | Default Java | AdoptOpenJDK-1.8.0_232-b09 | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2535/4/testReport/ | | Max. process+thread count | 4566 (vs. ulimit of 3) | | modules | C: hbase-common hbase-server U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2535/4/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] virajjasani commented on a change in pull request #2623: HBASE-25240 gson format of RpcServer.logResponse is abnormal
virajjasani commented on a change in pull request #2623: URL: https://github.com/apache/hbase/pull/2623#discussion_r517324513 ## File path: hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestGsonUtil.java ## @@ -0,0 +1,58 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.hadoop.hbase.util; + +import static org.junit.Assert.assertEquals; +import static org.junit.Assert.assertTrue; + +import org.apache.hadoop.hbase.HBaseClassTestRule; +import org.apache.hadoop.hbase.testclassification.MiscTests; +import org.apache.hadoop.hbase.testclassification.SmallTests; +import org.apache.hbase.thirdparty.com.google.gson.Gson; +import org.junit.ClassRule; +import org.junit.Test; +import org.junit.experimental.categories.Category; + +@Category({ MiscTests.class, SmallTests.class }) +public class TestGsonUtil { + + @ClassRule + public static final HBaseClassTestRule CLASS_RULE = +HBaseClassTestRule.forClass(TestGsonUtil.class); + + private static final Gson GSON = GsonUtil.createGson().create(); + private static final Gson DHE_GSON = GsonUtil.createGsonWithDisableHtmlEscaping().create(); + + @Test + public void testDisableHtmlEscaping() { +String testStr = "==="; + +// disable html escaping +String json = DHE_GSON.toJson(testStr); +assertTrue(json.startsWith("\"") && json.endsWith("\"")); +assertEquals(testStr.length() + 2, json.length()); +assertEquals(testStr, json.substring(1, json.length() - 1)); Review comment: One more assert or maybe replace with this one: ``` assertEquals("\"===\"", json); ``` ## File path: hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestGsonUtil.java ## @@ -0,0 +1,58 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.hbase.util; + +import static org.junit.Assert.assertEquals; +import static org.junit.Assert.assertTrue; + +import org.apache.hadoop.hbase.HBaseClassTestRule; +import org.apache.hadoop.hbase.testclassification.MiscTests; +import org.apache.hadoop.hbase.testclassification.SmallTests; +import org.apache.hbase.thirdparty.com.google.gson.Gson; +import org.junit.ClassRule; +import org.junit.Test; +import org.junit.experimental.categories.Category; + +@Category({ MiscTests.class, SmallTests.class }) +public class TestGsonUtil { + + @ClassRule + public static final HBaseClassTestRule CLASS_RULE = +HBaseClassTestRule.forClass(TestGsonUtil.class); + + private static final Gson GSON = GsonUtil.createGson().create(); + private static final Gson DHE_GSON = GsonUtil.createGsonWithDisableHtmlEscaping().create(); + + @Test + public void testDisableHtmlEscaping() { +String testStr = "==="; + +// disable html escaping +String json = DHE_GSON.toJson(testStr); +assertTrue(json.startsWith("\"") && json.endsWith("\"")); +assertEquals(testStr.length() + 2, json.length()); +assertEquals(testStr, json.substring(1, json.length() - 1)); + +// enable html escaping, turn '=' into '\u003d' +json = GSON.toJson(testStr); +assertTrue(json.startsWith("\"") && json.endsWith("\"")); +assertEquals(testStr.length() * 6 + 2, json.length()); +assertEquals(testStr.replaceAll("=", "u003d"), + json.substring(1, json.length() - 1)); Review comment: One more assert or maybe replace with this one: ``` assertEquals("\"\\u003d\\u003d\\u003d\"", json); ``` -
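The escaping behavior under review can be reproduced with stock Gson. A minimal sketch, assuming the unshaded `com.google.gson` package rather than HBase's shaded `org.apache.hbase.thirdparty` copy; `GsonEscapingDemo` is a hypothetical class name, not part of the patch:

```java
import com.google.gson.Gson;
import com.google.gson.GsonBuilder;

public class GsonEscapingDemo {
    public static void main(String[] args) {
        // By default Gson is HTML-safe and escapes characters such as '='
        // into their unicode form \u003d
        Gson escaping = new GsonBuilder().create();
        // disableHtmlEscaping() keeps '=' literal in the JSON output, which
        // is the behavior the HBase GsonUtil change exposes
        Gson plain = new GsonBuilder().disableHtmlEscaping().create();

        String input = "===";
        System.out.println(escaping.toJson(input)); // "\u003d\u003d\u003d"
        System.out.println(plain.toJson(input));    // "==="
    }
}
```

The six-characters-per-`=` expansion (`\u003d`) is why the test above checks `testStr.length() * 6 + 2` for the escaping Gson and `testStr.length() + 2` for the non-escaping one.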
[GitHub] [hbase] Apache-HBase commented on pull request #2623: HBASE-25240 gson format of RpcServer.logResponse is abnormal
Apache-HBase commented on pull request #2623: URL: https://github.com/apache/hbase/pull/2623#issuecomment-721711375 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 33s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 22s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 3m 36s | master passed | | +1 :green_heart: | checkstyle | 1m 30s | master passed | | +1 :green_heart: | spotbugs | 2m 46s | master passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 13s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 27s | the patch passed | | -0 :warning: | checkstyle | 0m 23s | hbase-common: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | hadoopcheck | 17m 54s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1 3.3.0. | | +1 :green_heart: | spotbugs | 4m 10s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 27s | The patch does not generate ASF License warnings. 
| | | | 46m 25s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2623/4/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2623 | | Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti checkstyle | | uname | Linux db2ce714ecc6 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 4bd9ee43a4 | | checkstyle | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2623/4/artifact/yetus-general-check/output/diff-checkstyle-hbase-common.txt | | Max. process+thread count | 94 (vs. ulimit of 3) | | modules | C: hbase-common hbase-server U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2623/4/console | | versions | git=2.17.1 maven=3.6.3 spotbugs=3.1.12 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #2535: HBASE-25116 RegionMonitor support RegionTask count normalize
Apache-HBase commented on pull request #2535: URL: https://github.com/apache/hbase/pull/2535#issuecomment-721689439 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 30s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 30s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 3m 52s | master passed | | +1 :green_heart: | checkstyle | 1m 27s | master passed | | +1 :green_heart: | spotbugs | 2m 44s | master passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 14s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 30s | the patch passed | | +1 :green_heart: | checkstyle | 1m 26s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | hadoopcheck | 18m 11s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1 3.3.0. | | +1 :green_heart: | spotbugs | 3m 6s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 25s | The patch does not generate ASF License warnings. | | | | 43m 34s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2535/4/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2535 | | Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti checkstyle | | uname | Linux 35c7a55537c5 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 4bd9ee43a4 | | Max. 
process+thread count | 94 (vs. ulimit of 3) | | modules | C: hbase-common hbase-server U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2535/4/console | | versions | git=2.17.1 maven=3.6.3 spotbugs=3.1.12 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (HBASE-25244) Support splitting a region into N parts at a time
[ https://issues.apache.org/jira/browse/HBASE-25244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zhuqi updated HBASE-25244: -- Description: In the current reference file format, only one parent region split into two references can be recorded. At this time, if you want to continue splitting the daughter region, you must wait until the majorCompaction is over and the reference file is deleted before you can continue to split the region. If the reference file could point to other reference files (whose data has not yet been moved from the parent region to the region under the corresponding folder), a multi-level reference could be established. At this point a tree structure is formed: only the root contains physical data, and the regions on the leaf nodes are serving. was: In the current reference file format, only one parent region split into two references can be recorded. At this time, if you want to continue splitting the daughter region, you must wait until the majorCompaction is over and the reference file is deleted before you can continue to split the region. If the reference file could point to other reference files, that is, the data has not been moved from the parent region to the region under the corresponding folder, a multi-level reference could be established. At this point a tree structure is formed: only the root contains physical data, and the regions on the leaf nodes are serving. > Support splitting a region into N parts at a time > - > > Key: HBASE-25244 > URL: https://issues.apache.org/jira/browse/HBASE-25244 > Project: HBase > Issue Type: New Feature > Components: regionserver >Reporter: zhuqi >Assignee: zhuqi >Priority: Major > > In the current reference file format, only one parent region split into two > references can be recorded.
At this time, if you want to continue splitting > the daughter region, you must wait until the majorCompaction is over and the > reference file is deleted before you can continue to split the region. > If the reference file could point to other reference files (whose data has not > been moved from the parent region to the region under the corresponding > folder), a multi-level reference could be established. At this point a tree > structure is formed: only the root contains physical data, and the regions on > the leaf nodes are serving. -- This message was sent by Atlassian Jira (v8.3.4#803005)
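The multi-level reference tree described above can be sketched abstractly. This is a hypothetical plain-Java model of the proposed structure, not HBase's actual `Reference` file format: only the root holds physical data, inner nodes are references to their parent, and only leaf regions serve traffic.

```java
import java.util.ArrayList;
import java.util.List;

public class ReferenceTreeSketch {
    static class RegionNode {
        final String name;
        final RegionNode parent;      // null only at the root
        final List<RegionNode> children = new ArrayList<>();

        RegionNode(String name, RegionNode parent) {
            this.name = name;
            this.parent = parent;
            if (parent != null) parent.children.add(this);
        }

        // Only the root holds physical data; every other node is a reference
        boolean holdsPhysicalData() { return parent == null; }

        // Only the regions on the leaf nodes are serving
        boolean isServing() { return children.isEmpty(); }

        // Resolving a reference walks up the chain to the physical data
        RegionNode resolveData() { return parent == null ? this : parent.resolveData(); }
    }

    public static void main(String[] args) {
        RegionNode root = new RegionNode("parent", null);
        RegionNode d1 = new RegionNode("daughterA", root);
        RegionNode d2 = new RegionNode("daughterB", root);
        // Split daughterA again before any major compaction rewrites its data:
        RegionNode g1 = new RegionNode("grandchildA1", d1);
        RegionNode g2 = new RegionNode("grandchildA2", d1);

        System.out.println(g1.resolveData().name); // parent
        System.out.println(d1.isServing());        // false
        System.out.println(g2.isServing());        // true
    }
}
```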
[jira] [Commented] (HBASE-24859) Optimize in-memory representation of mapreduce TableSplit objects
[ https://issues.apache.org/jira/browse/HBASE-24859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17225976#comment-17225976 ] Hudson commented on HBASE-24859: Results for branch master [build #115 on builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/115/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/115/General_20Nightly_20Build_20Report/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/115/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/115/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Optimize in-memory representation of mapreduce TableSplit objects > - > > Key: HBASE-24859 > URL: https://issues.apache.org/jira/browse/HBASE-24859 > Project: HBase > Issue Type: Improvement > Components: mapreduce >Affects Versions: 3.0.0-alpha-1, 2.3.3, 1.7.0, 2.4.0, 2.2.7 >Reporter: Sandeep Pal >Assignee: Sandeep Pal >Priority: Major > Fix For: 3.0.0-alpha-1, 1.7.0, 2.4.0, 2.2.7, 2.3.4 > > Attachments: Screen Shot 2020-08-26 at 8.44.34 AM.png, hbase-24859.png > > > It has been observed that when the table has too many regions, MR jobs > consume a lot of memory in the client. This is because we keep the region > level information in memory and the memory heavy object is TableSplit because > of the Scan object as a part of it. 
> However, it looks like the TableInputFormat for single table doesn't need to > store the scan object in the TableSplit because we do not use it and all the > splits are expected to have the exact same scan object. In TableInputFormat > we use the scan object directly from the MR conf.
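The optimization above amounts to keeping one serialized scan in the job configuration instead of a copy inside each of N splits. A plain-Java sketch of the pattern (no HBase dependency; `SCAN_KEY` stands in for the real `TableInputFormat.SCAN` conf key, and `Split` for `TableSplit`):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class SharedScanSketch {
    // Stand-in for TableInputFormat.SCAN, the conf key under which the
    // job's single Scan definition is serialized
    static final String SCAN_KEY = "hbase.mapreduce.scan";

    // A split carries only its row range, not a copy of the scan object
    record Split(String startRow, String endRow) {}

    // Every mapper rebuilds the same scan definition from the job conf
    static String scanForMapper(Map<String, String> jobConf) {
        return jobConf.get(SCAN_KEY);
    }

    public static void main(String[] args) {
        Map<String, String> jobConf = new HashMap<>();
        jobConf.put(SCAN_KEY, "base64-serialized-scan"); // stored once per job

        List<Split> splits = new ArrayList<>();
        for (int i = 0; i < 1000; i++) {
            splits.add(new Split("row" + i, "row" + (i + 1)));
        }
        // 1000 lightweight splits share one scan definition in the conf,
        // instead of each split holding its own heavy copy
        System.out.println(splits.size() + " splits, scan = " + scanForMapper(jobConf));
    }
}
```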
[jira] [Commented] (HBASE-25212) Optionally abort requests in progress after deciding a region should close
[ https://issues.apache.org/jira/browse/HBASE-25212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17225978#comment-17225978 ] Hudson commented on HBASE-25212: Results for branch master [build #115 on builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/115/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/115/General_20Nightly_20Build_20Report/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/115/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/115/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Optionally abort requests in progress after deciding a region should close > -- > > Key: HBASE-25212 > URL: https://issues.apache.org/jira/browse/HBASE-25212 > Project: HBase > Issue Type: Improvement > Components: regionserver >Reporter: Andrew Kyle Purtell >Assignee: Andrew Kyle Purtell >Priority: Major > Fix For: 3.0.0-alpha-1, 1.7.0, 2.4.0 > > > After deciding a region should be closed, the regionserver will set the > internal region state to closing and wait for all pending requests to > complete, via a rendezvous on the region lock. In closing state the region > will not accept any new requests but requests in progress will be allowed to > complete before the close action takes place. In our production we see > outlier wait times on this lock in excess of several minutes. 
> During close when there are requests in flight the regionserver is subject to > any conceivable reason for delay, like full scans over large regions, > expensive filtering hierarchies, bugs, or store level performance problems > like slow HDFS. The regionserver should interrupt requests in progress to > facilitate smaller/shorter close times on an opt-in basis. > Optionally, via configuration parameter -- which would be a system wide > default set in hbase-site.xml in common practice but could be overridden in > table schema for per table settings -- interrupt requests in progress holding > the region lock rather than wait for completion of all operations in flight. > Send back NotServingRegionException("region is closing") to the clients of > the interrupted operations, like we do after the write lock is acquired. The > client will transparently relocate the region data and resubmit the aborted > requests per normal retry policy. This can be less disruptive than waiting > for very long times for a region to close in extreme outlier cases (e.g. 50 > minutes). In such extreme cases it is better to abort the regionserver if the > close lock cannot be acquired in a reasonable amount of time, because the > region cannot be made available again until it has closed. > After waiting for all requests to complete, we flush the region's > memstore and finish the close. The flush portion of the close process is out > of scope of this proposal. Under normal conditions the flush portion of the > close completes quickly. It is specifically the waits on the close lock that > have been an occasional issue in our production that causes difficulty achieving > 99.99% availability.
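The opt-in abort described above boils down to interrupting handler threads instead of waiting out their in-flight requests. A hypothetical plain-Java sketch of the idea; none of these names are HBase's actual close-path classes:

```java
public class AbortOnCloseSketch {
    public static void main(String[] args) throws InterruptedException {
        Thread handler = new Thread(() -> {
            try {
                // Simulates a long-running request (e.g. a full scan) that
                // would otherwise hold up the region close for minutes
                Thread.sleep(60_000);
            } catch (InterruptedException e) {
                // Under the proposed option the server interrupts us here and
                // answers the client with NotServingRegionException, so the
                // client relocates the region and retries per normal policy
                System.out.println("request aborted: region is closing");
            }
        });
        handler.start();

        // Region decided to close: abort the in-flight request instead of
        // waiting for it to finish before acquiring the region close lock
        handler.interrupt();
        handler.join();
    }
}
```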
[jira] [Commented] (HBASE-25210) RegionInfo.isOffline is now a duplication with RegionInfo.isSplit
[ https://issues.apache.org/jira/browse/HBASE-25210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17225977#comment-17225977 ] Hudson commented on HBASE-25210: Results for branch master [build #115 on builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/115/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/115/General_20Nightly_20Build_20Report/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/115/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/115/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > RegionInfo.isOffline is now a duplication with RegionInfo.isSplit > - > > Key: HBASE-25210 > URL: https://issues.apache.org/jira/browse/HBASE-25210 > Project: HBase > Issue Type: Improvement > Components: meta >Reporter: Duo Zhang >Assignee: niuyulin >Priority: Major > Fix For: 3.0.0-alpha-1 > > > The only place, where we set it to true is in splitRegion, and at the same > time we will set split to true. > So in general, I suggest that we deprecated isOffline and isSplitParent in > RegionInfo, only leave the isSplit method. And in RegionInfoBuilder, we > deprecated setOffline and only leave the setSplit method. > This could make our code base cleaner. > And for serialization compatibility, we'd better still keep the split and > offline fields in the actual RegionInfo datastructure for a while. 
[jira] [Commented] (HBASE-25235) Cleanup the deprecated methods in TimeRange
[ https://issues.apache.org/jira/browse/HBASE-25235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17225979#comment-17225979 ] Hudson commented on HBASE-25235: Results for branch master [build #115 on builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/115/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/115/General_20Nightly_20Build_20Report/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/115/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/115/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Cleanup the deprecated methods in TimeRange > --- > > Key: HBASE-25235 > URL: https://issues.apache.org/jira/browse/HBASE-25235 > Project: HBase > Issue Type: Sub-task > Components: API, Client >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > Fix For: 3.0.0-alpha-1 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] virajjasani closed pull request #2624: HBASE-25245 : Fixing incorrect maven and jdk names for generate-hbase-website
virajjasani closed pull request #2624: URL: https://github.com/apache/hbase/pull/2624
[GitHub] [hbase] Apache-HBase commented on pull request #2624: HBASE-25245 : Fixing incorrect maven and jdk names for generate-hbase-website
Apache-HBase commented on pull request #2624: URL: https://github.com/apache/hbase/pull/2624#issuecomment-721651375
[GitHub] [hbase] Apache-HBase commented on pull request #2452: HBASE-25071 ReplicationServer support start ReplicationSource internal
Apache-HBase commented on pull request #2452: URL: https://github.com/apache/hbase/pull/2452#issuecomment-721650905 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 10s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ HBASE-24666 Compile Tests _ | | +0 :ok: | mvndep | 0m 12s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 3m 49s | HBASE-24666 passed | | +1 :green_heart: | compile | 1m 46s | HBASE-24666 passed | | +1 :green_heart: | shadedjars | 7m 17s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 49s | HBASE-24666 passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 13s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 48s | the patch passed | | +1 :green_heart: | compile | 1m 47s | the patch passed | | +1 :green_heart: | javac | 1m 47s | the patch passed | | +1 :green_heart: | shadedjars | 7m 11s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 48s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 0m 48s | hbase-protocol-shaded in the patch passed. | | +1 :green_heart: | unit | 210m 4s | hbase-server in the patch passed. 
| | | | 241m 47s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.13 Server=19.03.13 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2452/11/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2452 | | Optional Tests | unit javac javadoc shadedjars compile | | uname | Linux bdd0029df15d 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | HBASE-24666 / f67c3dfc5a | | Default Java | 1.8.0_232 | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2452/11/testReport/ | | Max. process+thread count | 3559 (vs. ulimit of 3) | | modules | C: hbase-protocol-shaded hbase-server U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2452/11/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] virajjasani commented on a change in pull request #2623: HBASE-25240 gson format of RpcServer.logResponse is abnormal
virajjasani commented on a change in pull request #2623: URL: https://github.com/apache/hbase/pull/2623#discussion_r517242016 ## File path: hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestGsonUtil.java ## @@ -0,0 +1,46 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.hbase.util; + +import static org.junit.Assert.assertEquals; + +import org.apache.hadoop.hbase.HBaseClassTestRule; +import org.apache.hadoop.hbase.testclassification.MiscTests; +import org.apache.hadoop.hbase.testclassification.SmallTests; +import org.apache.hbase.thirdparty.com.google.gson.Gson; +import org.junit.ClassRule; +import org.junit.Test; +import org.junit.experimental.categories.Category; + +@Category({ MiscTests.class, SmallTests.class }) +public class TestGsonUtil { + + @ClassRule + public static final HBaseClassTestRule CLASS_RULE = +HBaseClassTestRule.forClass(TestGsonUtil.class); + + private static final Gson DHE_GSON = GsonUtil.createGsonWithDisableHtmlEscaping().create(); + + @Test + public void testDisableHtmlEscaping() { Review comment: @WenFeiYi Can you provide another unit test with the same example that you have posted on Jira? 
i.e. ``` {"call":"Multi(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MultiRequest)","multi.gets":0,"starttimems":"1604389131993","responsesize":"64","method":"Multi","param":"region\u003d test,,1604389129684.8812226d0f8942b24892c79e3c393b26., for 10 action(s) and 1st row key\u003d11","processingtimems":20,"client":"172.16.136.23:61264","queuetimems":0,"multi.servicecalls":0,"class":"MiniHBaseClusterRegionServer","multi.mutations":10} ``` And assert that `\u003d` is replaced by `=` in the entire string message?
[jira] [Commented] (HBASE-25216) The client zk syncer should deal with meta replica count change
[ https://issues.apache.org/jira/browse/HBASE-25216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17225968#comment-17225968 ] Duo Zhang commented on HBASE-25216: --- Pushed to master and branch-2. Let's wait for a while to see if it works for branch-2. > The client zk syncer should deal with meta replica count change > --- > > Key: HBASE-25216 > URL: https://issues.apache.org/jira/browse/HBASE-25216 > Project: HBase > Issue Type: Bug > Components: master, Zookeeper >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > Fix For: 3.0.0-alpha-1, 2.4.0 > > > The failure of TestSeparateClientZKCluster is because we start the zk > syncer before we initialize meta region, and after HBASE-25099, we will scan > zookeeper to get the meta znodes directly instead of checking the config, so > we will get an empty list since there is no meta location on zk yet, thus we > will sync nothing. > But changing the order can not solve everything, as after HBASE-25099, we can > change the meta replica count without restarting master, so the zk syncer > should have the ability to know the change and start to sync the location for > the new replicas. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] Apache-HBase commented on pull request #2624: HBASE-25245 : Fixing incorrect maven and jdk names for generate-hbase-website
Apache-HBase commented on pull request #2624: URL: https://github.com/apache/hbase/pull/2624#issuecomment-721647347 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 29s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 22s | Maven dependency ordering for branch | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 8s | Maven dependency ordering for patch | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | ||| _ Other Tests _ | | +0 :ok: | asflicense | 0m 0s | ASF License check generated no output? | | | | 2m 5s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2624/1/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2624 | | Optional Tests | dupname asflicense | | uname | Linux edb8787aa1f3 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 49774c7e18 | | Max. process+thread count | 50 (vs. ulimit of 3) | | modules | C: U: | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2624/1/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (HBASE-25216) The client zk syncer should deal with meta replica count change
[ https://issues.apache.org/jira/browse/HBASE-25216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-25216: -- Fix Version/s: 2.4.0 3.0.0-alpha-1 > The client zk syncer should deal with meta replica count change > --- > > Key: HBASE-25216 > URL: https://issues.apache.org/jira/browse/HBASE-25216 > Project: HBase > Issue Type: Bug > Components: master, Zookeeper >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > Fix For: 3.0.0-alpha-1, 2.4.0 > > > The failure of TestSeparateClientZKCluster is because we start the zk > syncer before we initialize meta region, and after HBASE-25099, we will scan > zookeeper to get the meta znodes directly instead of checking the config, so > we will get an empty list since there is no meta location on zk yet, thus we > will sync nothing. > But changing the order can not solve everything, as after HBASE-25099, we can > change the meta replica count without restarting master, so the zk syncer > should have the ability to know the change and start to sync the location for > the new replicas.
[GitHub] [hbase] virajjasani opened a new pull request #2624: HBASE-25245 : Fixing incorrect maven and jdk names for generate-hbase-website
virajjasani opened a new pull request #2624: URL: https://github.com/apache/hbase/pull/2624
[jira] [Created] (HBASE-25245) hbase_generate_website is failing due to incorrect jdk and maven syntax
Viraj Jasani created HBASE-25245: Summary: hbase_generate_website is failing due to incorrect jdk and maven syntax Key: HBASE-25245 URL: https://issues.apache.org/jira/browse/HBASE-25245 Project: HBase Issue Type: Task Reporter: Viraj Jasani Assignee: Viraj Jasani While waiting for the HBase download page to reflect the new release, and during an offline syncup with [~ndimiduk], realized that generate website has been failing for quite some time now, e.g. https://ci-hadoop.apache.org/job/HBase/job/hbase_generate_website/80/ {code:java} Obtained dev-support/jenkins-scripts/generate-hbase-website.Jenkinsfile from git https://gitbox.apache.org/repos/asf/hbase.git Running in Durability level: PERFORMANCE_OPTIMIZED org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed: WorkflowScript: 40: Tool type "maven" does not have an install of "Maven (latest)" configured - did you mean "maven_latest"? @ line 40, column 15. maven 'Maven (latest)' ^ WorkflowScript: 42: Tool type "jdk" does not have an install of "JDK 1.8 (latest)" configured - did you mean "jdk_1.8_latest"? @ line 42, column 13. jdk "JDK 1.8 (latest)" ^ {code} We might have to apply a fix similar to HBASE-25204 to the generate-website-specific Jenkinsfile.
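Jenkins suggests the configured tool IDs in the error output itself. A sketch of the corrected `tools` block for the Jenkinsfile (the exact IDs depend on the Jenkins controller's Global Tool Configuration; `maven_latest` and `jdk_1.8_latest` are taken from the "did you mean" hints above):

```groovy
// Declarative pipeline tools block, referencing tool installations by
// their configured IDs rather than by display names.
tools {
  maven 'maven_latest'   // was: maven 'Maven (latest)'
  jdk 'jdk_1.8_latest'   // was: jdk "JDK 1.8 (latest)"
}
```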
[jira] [Created] (HBASE-25244) Support splitting a region into N parts at a time
zhuqi created HBASE-25244: - Summary: Support splitting a region into N parts at a time Key: HBASE-25244 URL: https://issues.apache.org/jira/browse/HBASE-25244 Project: HBase Issue Type: New Feature Components: regionserver Reporter: zhuqi Assignee: zhuqi In the current reference file format, only one parent region split into two references can be recorded. As a result, if you want to continue splitting a daughter region, you must wait until major compaction finishes and the reference file is deleted before the region can be split again. If a reference file could instead point to other reference files (that is, without the data having yet been moved from the parent region into the folder of the corresponding daughter region), a multi-level reference would be established, forming a tree structure in which only the root holds the physical data while the regions at the leaf nodes are the ones serving requests.
[jira] [Updated] (HBASE-25243) Support lazy loading operation for editing tableSchema config
[ https://issues.apache.org/jira/browse/HBASE-25243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zhuqi updated HBASE-25243: -- Description: Currently, when “alter” is used to modify the tableSchema, the region will be reopened immediately to load the meta information in .tabledesc. Now add a lazy loading modification command, there is no need to reopen the region immediately, and then load it when a region move or other region behavior occurs. (was: Currently, when “alter” is used to modify the tablesschema, the region will be reopened immediately to load the meta information in .tabledesc. Now add a lazy loading modification command, there is no need to reopen the region immediately, and then load it when a region move or other region behavior occurs.) > Support lazy loading operation for editing tableSchema config > -- > > Key: HBASE-25243 > URL: https://issues.apache.org/jira/browse/HBASE-25243 > Project: HBase > Issue Type: New Feature > Components: regionserver >Reporter: zhuqi >Assignee: zhuqi >Priority: Major > > Currently, when “alter” is used to modify the tableSchema, the region will be > reopened immediately to load the meta information in .tabledesc. Now add a > lazy loading modification command, there is no need to reopen the region > immediately, and then load it when a region move or other region behavior > occurs. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-25243) Support lazy loading operation for editing tableSchema config
[ https://issues.apache.org/jira/browse/HBASE-25243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zhuqi updated HBASE-25243: -- Summary: Support lazy loading operation for editing tableSchema config (was: Support Lazy loading operation for editing tableSchema config) > Support lazy loading operation for editing tableSchema config > -- > > Key: HBASE-25243 > URL: https://issues.apache.org/jira/browse/HBASE-25243 > Project: HBase > Issue Type: New Feature > Components: regionserver >Reporter: zhuqi >Assignee: zhuqi >Priority: Major > > Currently, when “alter” is used to modify the tablesschema, the region will > be reopened immediately to load the meta information in .tabledesc. Now add a > lazy loading modification command, there is no need to reopen the region > immediately, and then load it when a region move or other region behavior > occurs. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HBASE-25243) Support Lazy loading operation for editing tableSchema config
zhuqi created HBASE-25243: - Summary: Support Lazy loading operation for editing tableSchema config Key: HBASE-25243 URL: https://issues.apache.org/jira/browse/HBASE-25243 Project: HBase Issue Type: New Feature Components: regionserver Reporter: zhuqi Assignee: zhuqi Currently, when “alter” is used to modify the tablesschema, the region will be reopened immediately to load the meta information in .tabledesc. Now add a lazy loading modification command, there is no need to reopen the region immediately, and then load it when a region move or other region behavior occurs.
[GitHub] [hbase] Apache9 commented on pull request #2563: HBASE-25200 Try enlarge the flaky test timeout for branch-2.2
Apache9 commented on pull request #2563: URL: https://github.com/apache/hbase/pull/2563#issuecomment-721634157 Seems we are still in a bad situation for branch-2.2? The flaky list is very long...
[jira] [Commented] (HBASE-25210) RegionInfo.isOffline is now a duplication with RegionInfo.isSplit
[ https://issues.apache.org/jira/browse/HBASE-25210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17225958#comment-17225958 ] Duo Zhang commented on HBASE-25210: --- Do you think we should backport this to branch-2 too? [~stack] > RegionInfo.isOffline is now a duplication with RegionInfo.isSplit > - > > Key: HBASE-25210 > URL: https://issues.apache.org/jira/browse/HBASE-25210 > Project: HBase > Issue Type: Improvement > Components: meta >Reporter: Duo Zhang >Assignee: niuyulin >Priority: Major > Fix For: 3.0.0-alpha-1 > > > The only place, where we set it to true is in splitRegion, and at the same > time we will set split to true. > So in general, I suggest that we deprecated isOffline and isSplitParent in > RegionInfo, only leave the isSplit method. And in RegionInfoBuilder, we > deprecated setOffline and only leave the setSplit method. > This could make our code base cleaner. > And for serialization compatibility, we'd better still keep the split and > offline fields in the actual RegionInfo datastructure for a while.
[jira] [Updated] (HBASE-25210) RegionInfo.isOffline is now a duplication with RegionInfo.isSplit
[ https://issues.apache.org/jira/browse/HBASE-25210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-25210: -- Component/s: meta > RegionInfo.isOffline is now a duplication with RegionInfo.isSplit > - > > Key: HBASE-25210 > URL: https://issues.apache.org/jira/browse/HBASE-25210 > Project: HBase > Issue Type: Improvement > Components: meta >Reporter: Duo Zhang >Assignee: niuyulin >Priority: Major > Fix For: 3.0.0-alpha-1 > > > The only place, where we set it to true is in splitRegion, and at the same > time we will set split to true. > So in general, I suggest that we deprecated isOffline and isSplitParent in > RegionInfo, only leave the isSplit method. And in RegionInfoBuilder, we > deprecated setOffline and only leave the setSplit method. > This could make our code base cleaner. > And for serialization compatibility, we'd better still keep the split and > offline fields in the actual RegionInfo datastructure for a while.
[GitHub] [hbase] Apache-HBase commented on pull request #2602: HBASE-25229 Instantiate BucketCache before RSs create their ephemeral nodes
Apache-HBase commented on pull request #2602: URL: https://github.com/apache/hbase/pull/2602#issuecomment-721631726 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 6m 51s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. | ||| _ branch-1 Compile Tests _ | | +1 :green_heart: | mvninstall | 9m 50s | branch-1 passed | | +1 :green_heart: | compile | 0m 42s | branch-1 passed with JDK Azul Systems, Inc.-1.8.0_262-b19 | | +1 :green_heart: | compile | 0m 44s | branch-1 passed with JDK Azul Systems, Inc.-1.7.0_272-b10 | | +1 :green_heart: | checkstyle | 1m 45s | branch-1 passed | | +1 :green_heart: | shadedjars | 3m 3s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 47s | branch-1 passed with JDK Azul Systems, Inc.-1.8.0_262-b19 | | +1 :green_heart: | javadoc | 0m 41s | branch-1 passed with JDK Azul Systems, Inc.-1.7.0_272-b10 | | +0 :ok: | spotbugs | 3m 2s | Used deprecated FindBugs config; considering switching to SpotBugs. 
| | +1 :green_heart: | findbugs | 3m 0s | branch-1 passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 56s | the patch passed | | +1 :green_heart: | compile | 0m 40s | the patch passed with JDK Azul Systems, Inc.-1.8.0_262-b19 | | +1 :green_heart: | javac | 0m 40s | the patch passed | | +1 :green_heart: | compile | 0m 46s | the patch passed with JDK Azul Systems, Inc.-1.7.0_272-b10 | | +1 :green_heart: | javac | 0m 46s | the patch passed | | -1 :x: | checkstyle | 1m 34s | hbase-server: The patch generated 5 new + 65 unchanged - 0 fixed = 70 total (was 65) | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | shadedjars | 2m 51s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | hadoopcheck | 4m 36s | Patch does not cause any errors with Hadoop 2.8.5 2.9.2. | | +1 :green_heart: | javadoc | 0m 31s | the patch passed with JDK Azul Systems, Inc.-1.8.0_262-b19 | | +1 :green_heart: | javadoc | 0m 42s | the patch passed with JDK Azul Systems, Inc.-1.7.0_272-b10 | | +1 :green_heart: | findbugs | 2m 54s | the patch passed | ||| _ Other Tests _ | | -1 :x: | unit | 95m 32s | hbase-server in the patch failed. | | +1 :green_heart: | asflicense | 0m 35s | The patch does not generate ASF License warnings. 
| | | | 143m 40s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2602/3/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2602 | | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux 3b3cec81b64f 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-home/workspace/Base-PreCommit-GitHub-PR_PR-2602/out/precommit/personality/provided.sh | | git revision | branch-1 / 6626cc1 | | Default Java | Azul Systems, Inc.-1.7.0_272-b10 | | Multi-JDK versions | /usr/lib/jvm/zulu-8-amd64:Azul Systems, Inc.-1.8.0_262-b19 /usr/lib/jvm/zulu-7-amd64:Azul Systems, Inc.-1.7.0_272-b10 | | checkstyle | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2602/3/artifact/out/diff-checkstyle-hbase-server.txt | | unit | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2602/3/artifact/out/patch-unit-hbase-server.txt | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2602/3/testReport/ | | Max. process+thread count | 3385 (vs. ulimit of 1) | | modules | C: hbase-server U: hbase-server | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2602/3/console | | versions | git=1.9.1 maven=3.0.5 findbugs=3.0.1 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To re
[GitHub] [hbase] Apache9 merged pull request #2614: HBASE-25216 The client zk syncer should deal with meta replica count …
Apache9 merged pull request #2614: URL: https://github.com/apache/hbase/pull/2614
[jira] [Commented] (HBASE-24210) Add Increment, Append and CheckAndMutate support to RowMutations
[ https://issues.apache.org/jira/browse/HBASE-24210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17225947#comment-17225947 ] Duo Zhang commented on HBASE-24210: --- Let's do this step by step. > Add Increment, Append and CheckAndMutate support to RowMutations > > > Key: HBASE-24210 > URL: https://issues.apache.org/jira/browse/HBASE-24210 > Project: HBase > Issue Type: New Feature >Reporter: Toshihiro Suzuki >Assignee: Toshihiro Suzuki >Priority: Major > Fix For: 3.0.0-alpha-1, 2.4.0 > > > Currently, RowMutations supports only Put and Delete. Supporting Increment, > Append and CheckAndMutate in RowMutations would be helpful for some use cases.
[GitHub] [hbase] Apache-HBase commented on pull request #2452: HBASE-25071 ReplicationServer support start ReplicationSource internal
Apache-HBase commented on pull request #2452: URL: https://github.com/apache/hbase/pull/2452#issuecomment-721616850 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 26s | Docker mode activated. | | -0 :warning: | yetus | 0m 4s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ HBASE-24666 Compile Tests _ | | +0 :ok: | mvndep | 0m 21s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 4m 20s | HBASE-24666 passed | | +1 :green_heart: | compile | 2m 7s | HBASE-24666 passed | | +1 :green_heart: | shadedjars | 6m 46s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 55s | HBASE-24666 passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 16s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 4m 9s | the patch passed | | +1 :green_heart: | compile | 2m 7s | the patch passed | | +1 :green_heart: | javac | 2m 7s | the patch passed | | +1 :green_heart: | shadedjars | 6m 54s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 54s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 0m 57s | hbase-protocol-shaded in the patch passed. | | +1 :green_heart: | unit | 142m 39s | hbase-server in the patch passed. 
| | | | 175m 7s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.13 Server=19.03.13 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2452/11/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2452 | | Optional Tests | unit javac javadoc shadedjars compile | | uname | Linux a7eadd63d83d 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | HBASE-24666 / f67c3dfc5a | | Default Java | 2020-01-14 | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2452/11/testReport/ | | Max. process+thread count | 4004 (vs. ulimit of 3) | | modules | C: hbase-protocol-shaded hbase-server U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2452/11/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #2623: HBASE-25240 gson format of RpcServer.logResponse is abnormal
Apache-HBase commented on pull request #2623: URL: https://github.com/apache/hbase/pull/2623#issuecomment-721610579 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 29s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 23s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 3m 38s | master passed | | +1 :green_heart: | compile | 1m 19s | master passed | | +1 :green_heart: | shadedjars | 6m 39s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 1m 2s | master passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 16s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 31s | the patch passed | | +1 :green_heart: | compile | 1m 21s | the patch passed | | +1 :green_heart: | javac | 1m 21s | the patch passed | | +1 :green_heart: | shadedjars | 6m 35s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 59s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 25s | hbase-common in the patch passed. | | +1 :green_heart: | unit | 146m 53s | hbase-server in the patch passed. 
| | | | 176m 48s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2623/3/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2623 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 7ae845b6db78 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / f37cd05c32 | | Default Java | AdoptOpenJDK-1.8.0_232-b09 | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2623/3/testReport/ | | Max. process+thread count | 4748 (vs. ulimit of 3) | | modules | C: hbase-common hbase-server U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2623/3/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #2623: HBASE-25240 gson format of RpcServer.logResponse is abnormal
Apache-HBase commented on pull request #2623: URL: https://github.com/apache/hbase/pull/2623#issuecomment-721603965 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 29s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 33s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 4m 10s | master passed | | +1 :green_heart: | compile | 1m 31s | master passed | | +1 :green_heart: | shadedjars | 6m 45s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 1m 5s | master passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 15s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 4m 5s | the patch passed | | +1 :green_heart: | compile | 1m 31s | the patch passed | | +1 :green_heart: | javac | 1m 31s | the patch passed | | +1 :green_heart: | shadedjars | 6m 41s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 1m 5s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 42s | hbase-common in the patch passed. | | +1 :green_heart: | unit | 132m 33s | hbase-server in the patch passed. 
| | | | 164m 55s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2623/3/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2623 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux d82018cb1391 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / f37cd05c32 | | Default Java | AdoptOpenJDK-11.0.6+10 | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2623/3/testReport/ | | Max. process+thread count | 3895 (vs. ulimit of 3) | | modules | C: hbase-common hbase-server U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2623/3/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] carp84 commented on a change in pull request #2614: HBASE-25216 The client zk syncer should deal with meta replica count …
carp84 commented on a change in pull request #2614: URL: https://github.com/apache/hbase/pull/2614#discussion_r517164251 ## File path: hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestSeparateClientZKCluster.java ## @@ -255,7 +260,20 @@ public void testAsyncTable() throws Exception { Get get = new Get(row); Result result = table.get(get).get(); LOG.debug("Result: " + Bytes.toString(result.getValue(family, qualifier))); - Assert.assertArrayEquals(value, result.getValue(family, qualifier)); + assertArrayEquals(value, result.getValue(family, qualifier)); +} + } + + @Test + public void testChangeMetaReplicaCount() throws Exception { Review comment: ok, got it, makes sense. Thanks for the clarification.
[GitHub] [hbase] ddupg commented on a change in pull request #2452: HBASE-25071 ReplicationServer support start ReplicationSource internal
ddupg commented on a change in pull request #2452: URL: https://github.com/apache/hbase/pull/2452#discussion_r517163741 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java ## @@ -224,6 +226,35 @@ public void init(Configuration conf, FileSystem fs, Path walDir, this.abortOnError = this.conf.getBoolean("replication.source.regionserver.abort", true); +if (conf.getBoolean(HConstants.REPLICATION_OFFLOAD_ENABLE_KEY, + HConstants.REPLICATION_OFFLOAD_ENABLE_DEFAULT)) { + fetchWALsThread = new Thread(() -> { Review comment: Interrupt the `fetchWALsThread` when the ReplicationServer exits, or when the peer terminates?
[GitHub] [hbase] Apache9 commented on a change in pull request #2614: HBASE-25216 The client zk syncer should deal with meta replica count …
Apache9 commented on a change in pull request #2614: URL: https://github.com/apache/hbase/pull/2614#discussion_r517161164 ## File path: hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestSeparateClientZKCluster.java ## @@ -255,7 +260,20 @@ public void testAsyncTable() throws Exception { Get get = new Get(row); Result result = table.get(get).get(); LOG.debug("Result: " + Bytes.toString(result.getValue(family, qualifier))); - Assert.assertArrayEquals(value, result.getValue(family, qualifier)); + assertArrayEquals(value, result.getValue(family, qualifier)); +} + } + + @Test + public void testChangeMetaReplicaCount() throws Exception { Review comment: RegionLocator will go to ConnectionRegistry to ask for the meta locations, and we have already set the ConnectionRegistry to ZKConnectionRegistry against the client zk in the setup method, so it is testing the ClientZKSyncer logic. You could try commenting out the set-data logic in ClientZKSyncer; the test will fail with a timeout...
[GitHub] [hbase] carp84 commented on a change in pull request #2614: HBASE-25216 The client zk syncer should deal with meta replica count …
carp84 commented on a change in pull request #2614: URL: https://github.com/apache/hbase/pull/2614#discussion_r517158312

## File path: hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestSeparateClientZKCluster.java

```diff
@@ -255,7 +260,20 @@ public void testAsyncTable() throws Exception {
       Get get = new Get(row);
       Result result = table.get(get).get();
       LOG.debug("Result: " + Bytes.toString(result.getValue(family, qualifier)));
-      Assert.assertArrayEquals(value, result.getValue(family, qualifier));
+      assertArrayEquals(value, result.getValue(family, qualifier));
+    }
+  }
+
+  @Test
+  public void testChangeMetaReplicaCount() throws Exception {
```

Review comment: Yes, but it seems the test no longer exercises any `ClientZKSyncer` logic — for example, whether the client can still correctly access meta through the client ZK — and instead only checks against the meta region locator?
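The mechanism under discussion: the master's ClientZKSyncer mirrors the meta-location znodes from the cluster's own ZooKeeper into a separate client-facing ZooKeeper ensemble, and when the meta replica count changes the syncer must create or delete mirrored nodes accordingly. As a rough, self-contained illustration of that invariant (plain maps stand in for the two ZK ensembles; the class, method names, and znode paths are hypothetical, not HBase's real API):

```java
import java.util.HashMap;
import java.util.Map;

// Hedged sketch of the ClientZKSyncer idea: keep the client-side view of the
// meta-location nodes identical to the server-side view, including deleting
// stale nodes after the replica count is lowered. Not HBase code.
public class MetaSyncerSketch {
    final Map<String, String> serverZk = new HashMap<>(); // cluster ZK (source of truth)
    final Map<String, String> clientZk = new HashMap<>(); // client-facing ZK (mirror)

    // simulate the master publishing one location node per meta replica
    void setMetaReplicaCount(int n) {
        serverZk.clear();
        for (int i = 0; i < n; i++) {
            serverZk.put("/hbase/meta-region-server-" + i, "rs" + i + ":16020");
        }
    }

    // mirror every node, then drop client-side nodes no longer on the server side
    void sync() {
        clientZk.putAll(serverZk);
        clientZk.keySet().retainAll(serverZk.keySet());
    }

    public static void main(String[] args) {
        MetaSyncerSketch s = new MetaSyncerSketch();
        s.setMetaReplicaCount(3);
        s.sync();
        System.out.println(s.clientZk.size()); // 3
        s.setMetaReplicaCount(1); // replica count lowered
        s.sync();
        System.out.println(s.clientZk.size()); // 1 — stale replica nodes removed
    }
}
```

The `retainAll` step is the part the JIRA title is about: without it, lowering the replica count would leave stale meta-location nodes behind in the client ZK.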
[GitHub] [hbase] Apache-HBase commented on pull request #2614: HBASE-25216 The client zk syncer should deal with meta replica count …
Apache-HBase commented on pull request #2614: URL: https://github.com/apache/hbase/pull/2614#issuecomment-721575234

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| +0 :ok: | reexec | 2m 20s | Docker mode activated. |
| -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck |
| | | | _ Prechecks _ |
| | | | _ master Compile Tests _ |
| +1 :green_heart: | mvninstall | 5m 15s | master passed |
| +1 :green_heart: | compile | 1m 16s | master passed |
| +1 :green_heart: | shadedjars | 8m 26s | branch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 0m 53s | master passed |
| | | | _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 4m 46s | the patch passed |
| +1 :green_heart: | compile | 1m 18s | the patch passed |
| +1 :green_heart: | javac | 1m 18s | the patch passed |
| +1 :green_heart: | shadedjars | 8m 23s | patch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 0m 49s | the patch passed |
| | | | _ Other Tests _ |
| +1 :green_heart: | unit | 218m 15s | hbase-server in the patch passed. |
| | | 253m 42s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2614/4/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/2614 |
| Optional Tests | javac javadoc unit shadedjars compile |
| uname | Linux e3f53232920a 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / f37cd05c32 |
| Default Java | AdoptOpenJDK-1.8.0_232-b09 |
| Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2614/4/testReport/ |
| Max. process+thread count | 3815 (vs. ulimit of 3) |
| modules | C: hbase-server U: hbase-server |
| Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2614/4/console |
| versions | git=2.17.1 maven=3.6.3 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.