[jira] [Commented] (HBASE-20526) multithreads bulkload performance
[ https://issues.apache.org/jira/browse/HBASE-20526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16903473#comment-16903473 ] HBase QA commented on HBASE-20526:

(x) *-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
|  0 | reexec | 0m 39s | Docker mode activated. |
|| || || || Prechecks ||
|  0 | findbugs | 0m 0s | Findbugs executables are not available. |
| +1 | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || || || branch-1.3 Compile Tests ||
| +1 | mvninstall | 8m 15s | branch-1.3 passed |
| +1 | compile | 0m 32s | branch-1.3 passed with JDK v1.8.0_222 |
| +1 | compile | 0m 35s | branch-1.3 passed with JDK v1.7.0_232 |
| +1 | checkstyle | 1m 18s | branch-1.3 passed |
| +1 | shadedjars | 2m 29s | branch has no errors when building our shaded downstream artifacts. |
| +1 | javadoc | 0m 33s | branch-1.3 passed with JDK v1.8.0_222 |
| +1 | javadoc | 0m 35s | branch-1.3 passed with JDK v1.7.0_232 |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 1m 28s | the patch passed |
| +1 | compile | 0m 32s | the patch passed with JDK v1.8.0_222 |
| +1 | javac | 0m 32s | the patch passed |
| +1 | compile | 0m 35s | the patch passed with JDK v1.7.0_232 |
| +1 | javac | 0m 35s | the patch passed |
| -1 | checkstyle | 1m 11s | hbase-server: The patch generated 6 new + 25 unchanged - 0 fixed = 31 total (was 25) |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedjars | 2m 20s | patch has no errors when building our shaded downstream artifacts. |
| +1 | hadoopcheck | 8m 59s | Patch does not cause any errors with Hadoop 2.4.1 2.5.2 2.6.5 2.7.7. |
| +1 | javadoc | 0m 26s | the patch passed with JDK v1.8.0_222 |
| +1 | javadoc | 0m 35s | the patch passed with JDK v1.7.0_232 |
|| || || || Other Tests ||
| -1 | unit | 87m 48s | hbase-server in the patch failed. |
| +1 | asflicense | 0m 19s | The patch does not generate ASF License warnings. |
|    |  | 119m 45s |  |

|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.mapreduce.TestLoadIncrementalHFilesSplitRecovery |
|                    | hadoop.hbase.mapreduce.TestSecureLoadIncrementalHFilesSplitRecovery |

|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/PreCommit-HBASE-Build/742/artifact/patchprocess/Dockerfile |
| JIRA Issue | HBASE-20526 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/
[jira] [Commented] (HBASE-20526) multithreads bulkload performance
[ https://issues.apache.org/jira/browse/HBASE-20526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16472625#comment-16472625 ] Ted Yu commented on HBASE-20526:
The test failure seems to be related to the patch:
{code}
[ERROR] Tests run: 6, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 174.114 s <<< FAILURE! - in org.apache.hadoop.hbase.client.TestRestoreSnapshotFromClientWithRegionReplicas
[ERROR] testRestoreSnapshotAfterSplit(org.apache.hadoop.hbase.client.TestRestoreSnapshotFromClientWithRegionReplicas)  Time elapsed: 17.969 s  <<< ERROR!
org.apache.hadoop.hbase.snapshot.HBaseSnapshotException: org.apache.hadoop.hbase.snapshot.HBaseSnapshotException: Snapshot { ss=testRestoreSnapshotAfterSplit-snap table=testtb-1525617745131 type=FLUSH } had an error. Procedure testRestoreSnapshotAfterSplit-snap { waiting=[] done=[] }
	at org.apache.hadoop.hbase.master.snapshot.SnapshotManager.isSnapshotDone(SnapshotManager.java:354)
	at org.apache.hadoop.hbase.master.MasterRpcServices.isSnapshotDone(MasterRpcServices.java:1030)
	at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:58585)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2349)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:123)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)
Caused by: org.apache.hadoop.hbase.snapshot.CorruptedSnapshotException via Failed taking snapshot { ss=testRestoreSnapshotAfterSplit-snap table=testtb-1525617745131 type=FLUSH } due to exception: Manifest region info {ENCODED => 7e089fb3359462cea181a607beba185a, NAME => 'testtb-1525617745131,8,1525617745132_0002.7e089fb3359462cea181a607beba185a.', STARTKEY => '8', ENDKEY => '', OFFLINE => true, SPLIT => true, REPLICA_ID => 2} doesn't match expected region: {ENCODED => 9ca827b8e313d4b010ae10accb02a970, NAME => 'testtb-1525617745131,8,1525617745132.9ca827b8e313d4b010ae10accb02a970.', STARTKEY => '8', ENDKEY => '', OFFLINE => true, SPLIT => true}: org.apache.hadoop.hbase.snapshot.CorruptedSnapshotException: Manifest region info {ENCODED => 7e089fb3359462cea181a607beba185a, NAME => 'testtb-1525617745131,8,1525617745132_0002.7e089fb3359462cea181a607beba185a.', STARTKEY => '8', ENDKEY => '', OFFLINE => true, SPLIT => true, REPLICA_ID => 2} doesn't match expected region: {ENCODED => 9ca827b8e313d4b010ae10accb02a970, NAME => 'testtb-1525617745131,8,1525617745132.9ca827b8e313d4b010ae10accb02a970.', STARTKEY => '8', ENDKEY => '', OFFLINE => true, SPLIT => true}
	at org.apache.hadoop.hbase.errorhandling.ForeignExceptionDispatcher.rethrowException(ForeignExceptionDispatcher.java:83)
	at org.apache.hadoop.hbase.master.snapshot.TakeSnapshotHandler.rethrowExceptionIfFailed(TakeSnapshotHandler.java:315)
	at org.apache.hadoop.hbase.master.snapshot.SnapshotManager.isSnapshotDone(SnapshotManager.java:344)
	... 6 more
{code}
Please correct the test failure before posting a patch for master.
> multithreads bulkload performance
> ---------------------------------
>
>                 Key: HBASE-20526
>                 URL: https://issues.apache.org/jira/browse/HBASE-20526
>             Project: HBase
>          Issue Type: Improvement
>          Components: mapreduce, Zookeeper
>    Affects Versions: 1.2.5, 1.3.2
>        Environment: hbase-server-1.2.0-cdh5.12.1
>                     spark version 1.6
>           Reporter: Key Hutu
>           Assignee: Key Hutu
>           Priority: Minor
>             Labels: performance
>            Fix For: 1.3.2
>
>        Attachments: HBASE-20526-branch-1.3.V1.patch
>
>  Original Estimate: 96h
>  Remaining Estimate: 96h
>
> When doing a bulkload, the interaction with ZooKeeper to get the region key range can cost extra time.
> In a multithreaded environment, this can take 5 minutes or more.
> In the executor log, lines like 'Reading reply sessionid:0x262fb37f4a07080 , packet:: clientPath:null server ...' appear many times.
>
> It would be good to provide a new bulkload method that caches the key range outside.
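To make the proposal concrete, the following is a minimal sketch of the caching idea, not the attached patch itself: the region start/end keys are fetched once through RegionLocator and reused for every subsequent load, instead of being re-resolved (via ZooKeeper/meta) on every call. The four-argument doBulkLoad overload that accepts the cached Pair is the method the branch-1.3 patch appears to add and is not part of released HBase; the table name and HFile directories are hypothetical.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;
import org.apache.hadoop.hbase.util.Pair;

public class CachedKeyRangeBulkLoad {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    TableName tn = TableName.valueOf("my_table");   // hypothetical table name
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin();
         Table table = conn.getTable(tn);
         RegionLocator locator = conn.getRegionLocator(tn)) {
      // Resolve the region key range once, up front.
      Pair<byte[][], byte[][]> startEndKeys = locator.getStartEndKeys();
      LoadIncrementalHFiles loader = new LoadIncrementalHFiles(conf);
      // Reuse the cached key range for every subsequent load; with the patch applied,
      // these calls would not need to go back to ZooKeeper/meta for region boundaries.
      for (String dir : args) {
        // Overload taking the cached Pair is proposed by the patch (assumed, not in released HBase).
        loader.doBulkLoad(new Path(dir), admin, table, startEndKeys);
      }
    }
  }
}
{code}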
[jira] [Commented] (HBASE-20526) multithreads bulkload performance
[ https://issues.apache.org/jira/browse/HBASE-20526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16465345#comment-16465345 ] Ted Yu commented on HBASE-20526:
Every contribution, once accepted, goes to master branch first.
[jira] [Commented] (HBASE-20526) multithreads bulkload performance
[ https://issues.apache.org/jira/browse/HBASE-20526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16465343#comment-16465343 ] Key Hutu commented on HBASE-20526:
Thanks, Ted Yu. I see that the implementation on the master branch differs from branch-1.2/1.3. Can a submission based on branch-1.3 be accepted?
[jira] [Commented] (HBASE-20526) multithreads bulkload performance
[ https://issues.apache.org/jira/browse/HBASE-20526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16465191#comment-16465191 ] Hadoop QA commented on HBASE-20526:

(x) *-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
|  0 | reexec | 0m 33s | Docker mode activated. |
|| || || || Prechecks ||
|  0 | findbugs | 0m 0s | Findbugs executables are not available. |
| +1 | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || || || branch-1.3 Compile Tests ||
| +1 | mvninstall | 8m 6s | branch-1.3 passed |
| +1 | compile | 0m 38s | branch-1.3 passed with JDK v1.8.0_172 |
| +1 | compile | 0m 39s | branch-1.3 passed with JDK v1.7.0_181 |
| +1 | checkstyle | 1m 22s | branch-1.3 passed |
| +1 | shadedjars | 2m 37s | branch has no errors when building our shaded downstream artifacts. |
| +1 | javadoc | 0m 39s | branch-1.3 passed with JDK v1.8.0_172 |
| +1 | javadoc | 0m 36s | branch-1.3 passed with JDK v1.7.0_181 |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 1m 33s | the patch passed |
| +1 | compile | 0m 39s | the patch passed with JDK v1.8.0_172 |
| +1 | javac | 0m 39s | the patch passed |
| +1 | compile | 0m 40s | the patch passed with JDK v1.7.0_181 |
| +1 | javac | 0m 40s | the patch passed |
| -1 | checkstyle | 1m 20s | hbase-server: The patch generated 6 new + 25 unchanged - 0 fixed = 31 total (was 25) |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedjars | 2m 28s | patch has no errors when building our shaded downstream artifacts. |
| +1 | hadoopcheck | 8m 40s | Patch does not cause any errors with Hadoop 2.4.1 2.5.2 2.6.5 2.7.4. |
| +1 | javadoc | 0m 25s | the patch passed with JDK v1.8.0_172 |
| +1 | javadoc | 0m 36s | the patch passed with JDK v1.7.0_181 |
|| || || || Other Tests ||
| -1 | unit | 101m 45s | hbase-server in the patch failed. |
| +1 | asflicense | 0m 23s | The patch does not generate ASF License warnings. |
|    |  | 134m 20s |  |

|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.mapreduce.TestSecureLoadIncrementalHFilesSplitRecovery |
|                    | hadoop.hbase.mapreduce.TestLoadIncrementalHFilesSplitRecovery |
|                    | hadoop.hbase.client.TestRestoreSnapshotFromClientWithRegionReplicas |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:dca6535 |
| JIRA Issue | HBASE-20526 |
| JIRA Patch URL | https://issues.apache.org/
[jira] [Commented] (HBASE-20526) multithreads bulkload performance
[ https://issues.apache.org/jira/browse/HBASE-20526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16465143#comment-16465143 ] Ted Yu commented on HBASE-20526:
The change makes sense.
{code}
public void doBulkLoad(Path hfofDir, final Admin admin, Table table,
    final Pair<byte[][], byte[][]> startEndKeys) throws TableNotFoundException, IOException {
{code}
Please complete the javadoc for the parameters.
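One possible shape for the requested javadoc follows. The wording is a suggestion rather than text taken from the patch, and it assumes startEndKeys is the value returned by RegionLocator#getStartEndKeys().

{code}
/**
 * Perform a bulk load of the given directory into the given pre-existing table,
 * using a region key range the caller has already fetched (for example via
 * {@link org.apache.hadoop.hbase.client.RegionLocator#getStartEndKeys()}), so that
 * repeated calls do not have to re-resolve region boundaries.
 *
 * @param hfofDir      directory of HFileOutputFormat output containing the HFiles to load
 * @param admin        Admin used to validate and operate on the target table
 * @param table        the table to load into
 * @param startEndKeys cached region start keys (first) and end keys (second) of the table
 * @throws TableNotFoundException if the target table does not exist
 * @throws IOException if the bulk load fails
 */
public void doBulkLoad(Path hfofDir, final Admin admin, Table table,
    final Pair<byte[][], byte[][]> startEndKeys) throws TableNotFoundException, IOException {
  // ...
}
{code}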
[jira] [Commented] (HBASE-20526) multithreads bulkload performance
[ https://issues.apache.org/jira/browse/HBASE-20526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16465144#comment-16465144 ] Ted Yu commented on HBASE-20526:
Please base the next patch on master branch.
[jira] [Commented] (HBASE-20526) multithreads bulkload performance
[ https://issues.apache.org/jira/browse/HBASE-20526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16464628#comment-16464628 ] Key Hutu commented on HBASE-20526:
In the application, the doBulkLoad(hpath, admin, table, regionLocator) method is called. To ensure real-time performance, many small files are loaded at a high frequency.
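For reference, a minimal, self-contained sketch of the call pattern just described, using the stock branch-1 LoadIncrementalHFiles API; the table name and HFile directories are made up. Each doBulkLoad call re-resolves the table's region boundaries, which is where the repeated ZooKeeper/meta lookups in the executor log come from.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;

public class PerBatchBulkLoad {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    TableName tn = TableName.valueOf("my_table");   // hypothetical table name
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin();
         Table table = conn.getTable(tn);
         RegionLocator locator = conn.getRegionLocator(tn)) {
      LoadIncrementalHFiles loader = new LoadIncrementalHFiles(conf);
      // Each small batch of HFiles triggers its own doBulkLoad call; every call
      // looks up the table's region key range again before grouping the HFiles.
      for (String dir : args) {
        loader.doBulkLoad(new Path(dir), admin, table, locator);
      }
    }
  }
}
{code}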
[jira] [Commented] (HBASE-20526) multithreads bulkload performance
[ https://issues.apache.org/jira/browse/HBASE-20526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16464627#comment-16464627 ] Key Hutu commented on HBASE-20526:
Thank you for your attention, Ted Yu. The executor log looks like this:
{panel:title=executor stderr}
2018-05-05 12:19:41,948- WARN -330831[Executor task launch worker for task 187159]-(HBaseConfiguration.java:195)-Config option "hbase.regionserver.lease.period" is deprecated. Instead, use "hbase.client.scanner.timeout.period"
2018-05-05 12:19:41,948-DEBUG -330831[Executor task launch worker for task 187159-SendThread(host-8-2:2181)]-(ClientCnxn.java:818)-Reading reply sessionid:0x162fb3760b1ea01, packet:: clientPath:null serverPath:null finished:false header:: 199,8 replyHeader:: 199,197642441638,0 request:: '/hbase,F response:: v{'replication,'schema,'meta-region-server,'rs,'splitWAL,'backup-masters,'table-lock,'flush-table-proc,'region-in-transition,'online-snapshot,'master,'running,'balancer,'recovering-regions,'draining,'namespace,'hbaseid,'table}
2018-05-05 12:19:41,949-DEBUG -330832[Executor task launch worker for task 187159-SendThread(host-8-2:2181)]-(ClientCnxn.java:818)-Reading reply sessionid:0x162fb3760b1ea01, packet:: clientPath:null serverPath:null finished:false header:: 200,4 replyHeader:: 200,197642441638,0 request:: '/hbase/meta-region-server,F response:: #0001a726567696f6e7365727665723a3630303230ffb6ffac57ffadff80ff80ffa8b50425546a17aa686f73742d382d31323810fff4ffd4318ffd2ff8affe7ffd9ffaf2c100183,s{197568498964,197568498964,1524633515423,1524633515423,0,0,0,0,64,0,197568498964}
2018-05-05 12:19:41,950-DEBUG -330833[Executor task launch worker for task 187159-SendThread(host-8-2:2181)]-(ClientCnxn.java:818)-Reading reply sessionid:0x162fb3760b1ea01, packet:: clientPath:null serverPath:null finished:false header:: 201,8 replyHeader:: 201,197642441638,0 request:: '/hbase,F response:: v{'replication,'schema,'meta-region-server,'rs,'splitWAL,'backup-masters,'table-lock,'flush-table-proc,'region-in-transition,'online-snapshot,'master,'running,'balancer,'recovering-regions,'draining,'namespace,'hbaseid,'table}
2018-05-05 12:19:41,950-DEBUG -330833[Executor task launch worker for task 187159-SendThread(host-8-2:2181)]-(ClientCnxn.java:818)-Reading reply sessionid:0x162fb3760b1ea01, packet:: clientPath:null serverPath:null finished:false header:: 202,4 replyHeader:: 202,197642441638,0 request:: '/hbase/meta-region-server,F response:: #0001a726567696f6e7365727665723a3630303230ffb6ffac57ffadff80ff80ffa8b50425546a17aa686f73742d382d31323810fff4ffd4318ffd2ff8affe7ffd9ffaf2c100183,s{197568498964,197568498964,1524633515423,1524633515423,0,0,0,0,64,0,197568498964}
2018-05-05 12:19:41,950-DEBUG -330833[Executor task launch worker for task 187159-SendThread(host-8-2:2181)]-(ClientCnxn.java:818)-Reading reply sessionid:0x162fb3760b1ea01, packet:: clientPath:null serverPath:null finished:false header:: 203,8 replyHeader:: 203,197642441638,0 request:: '/hbase,F response:: v{'replication,'schema,'meta-region-server,'rs,'splitWAL,'backup-masters,'table-lock,'flush-table-proc,'region-in-transition,'online-snapshot,'master,'running,'balancer,'recovering-regions,'draining,'namespace,'hbaseid,'table}
2018-05-05 12:19:41,951-DEBUG -330834[Executor task launch worker for task 187159-SendThread(host-8-2:2181)]-(ClientCnxn.java:818)-Reading reply sessionid:0x162fb3760b1ea01, packet:: clientPath:null serverPath:null finished:false header:: 204,4 replyHeader:: 204,197642441638,0 request:: '/hbase/meta-region-server,F response:: #0001a726567696f6e7365727665723a3630303230ffb6ffac57ffadff80ff80ffa8b50425546a17aa686f73742d382d31323810fff4ffd4318ffd2ff8affe7ffd9ffaf2c100183,s{197568498964,197568498964,1524633515423,1524633515423,0,0,0,0,64,0,197568498964}
2018-05-05 12:19:42,002-DEBUG -330885[Executor task launch worker for task 201898]-(TaskMemoryManager.java:221)-Task 201898 acquired 256.0 KB for org.apache.spark.shuffle.sort.ShuffleExternalSorter@18f196e
2018-05-05 12:19:42,003-DEBUG -330886[Executor task launch worker for task 201898]-(TaskMemoryManager.java:230)-Task 201898 release 128.0 KB from org.apache.spark.shuffle.sort.ShuffleExternalSorter@18f196e
2018-05-05 12:19:42,053-DEBUG -330936[Executor task launch worker for task 187159-SendThread(host-8-2:2181)]-(ClientCnxn.java:818)-Reading reply sessionid:0x162fb3760b1ea01, packet:: clientPath:null serverPath:null finished:false header:: 205,8 replyHeader:: 205,197642441638,0 request:: '/hbase,F response:: v{'replication,'schema,'meta-region-server,'rs,'splitWAL,'backup-masters,'table-lock,'flush-table-proc,'region-in-transition,'online-snapshot,'master,'running,'balancer,'recovering-
[jira] [Commented] (HBASE-20526) multithreads bulkload performance
[ https://issues.apache.org/jira/browse/HBASE-20526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16462590#comment-16462590 ] Ted Yu commented on HBASE-20526:
Can you attach the relevant portion of the executor log so that we have a better idea of the issue you're optimizing? Thanks.