[GitHub] [hadoop] hadoop-yetus commented on issue #763: [WIP] HADOOP-15984. Update jersey from 1.19 to 2.x
hadoop-yetus commented on issue #763: [WIP] HADOOP-15984. Update jersey from 1.19 to 2.x
URL: https://github.com/apache/hadoop/pull/763#issuecomment-486073874

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| 0 | reexec | 39 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| +1 | test4tests | 0 | The patch appears to include 1 new or modified test files. |
||| _ trunk Compile Tests _ |
| 0 | mvndep | 21 | Maven dependency ordering for branch |
| +1 | mvninstall | 1145 | trunk passed |
| +1 | compile | 987 | trunk passed |
| +1 | checkstyle | 146 | trunk passed |
| +1 | mvnsite | 295 | trunk passed |
| +1 | shadedclient | 1219 | branch has no errors when building and testing our client artifacts. |
| 0 | findbugs | 0 | Skipped patched modules with no Java source: hadoop-project |
| +1 | findbugs | 384 | trunk passed |
| +1 | javadoc | 249 | trunk passed |
||| _ Patch Compile Tests _ |
| 0 | mvndep | 42 | Maven dependency ordering for patch |
| -1 | mvninstall | 21 | hadoop-hdfs-httpfs in the patch failed. |
| -1 | mvninstall | 23 | hadoop-hdfs-rbf in the patch failed. |
| -1 | compile | 281 | root in the patch failed. |
| -1 | javac | 281 | root in the patch failed. |
| -0 | checkstyle | 152 | root: The patch generated 12 new + 254 unchanged - 14 fixed = 266 total (was 268) |
| -1 | mvnsite | 24 | hadoop-hdfs-httpfs in the patch failed. |
| -1 | mvnsite | 24 | hadoop-hdfs-rbf in the patch failed. |
| -1 | whitespace | 0 | The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply |
| +1 | xml | 7 | The patch has no ill-formed XML file. |
| -1 | shadedclient | 194 | patch has errors when building and testing our client artifacts. |
| 0 | findbugs | 0 | Skipped patched modules with no Java source: hadoop-project |
| -1 | findbugs | 45 | hadoop-common-project/hadoop-kms generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) |
| -1 | findbugs | 23 | hadoop-hdfs-httpfs in the patch failed. |
| -1 | findbugs | 25 | hadoop-hdfs-rbf in the patch failed. |
| -1 | javadoc | 18 | hadoop-hdfs-project_hadoop-hdfs-httpfs generated 4 new + 5 unchanged - 0 fixed = 9 total (was 5) |
| -1 | javadoc | 33 | hadoop-hdfs-project_hadoop-hdfs-rbf generated 7 new + 0 unchanged - 0 fixed = 7 total (was 0) |
||| _ Other Tests _ |
| +1 | unit | 14 | hadoop-project in the patch passed. |
| +1 | unit | 539 | hadoop-common in the patch passed. |
| +1 | unit | 202 | hadoop-kms in the patch passed. |
| -1 | unit | 1374 | hadoop-hdfs in the patch failed. |
| -1 | unit | 23 | hadoop-hdfs-httpfs in the patch failed. |
| -1 | unit | 26 | hadoop-hdfs-rbf in the patch failed. |
| -1 | asflicense | 29 | The patch generated 1 ASF License warnings. |
| | | 8157 | |

| Reason | Tests |
|-------:|:------|
| FindBugs | module:hadoop-common-project/hadoop-kms |
| | Dead store to requestURL in org.apache.hadoop.crypto.key.kms.server.KMS.createKey(Map) At KMS.java:org.apache.hadoop.crypto.key.kms.server.KMS.createKey(Map) At KMS.java:[line 181] |
| Failed junit tests | hadoop.hdfs.server.balancer.TestBalancerWithEncryptedTransfer |
| | hadoop.hdfs.TestHDFSFileSystemContract |
| | hadoop.hdfs.TestErasureCodingPolicyWithSnapshot |
| | hadoop.hdfs.TestEncryptedTransfer |
| | hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport |
| | hadoop.hdfs.server.namenode.TestQuotaWithStripedBlocks |
| | hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain |
| | hadoop.hdfs.TestBlockStoragePolicy |
| | hadoop.hdfs.TestRollingUpgrade |
| | hadoop.hdfs.web.TestWebHDFSForHA |
| | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
| | hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics |
| | hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints |
| | hadoop.hdfs.TestMultipleNNPortQOP |
| | hadoop.fs.viewfs.TestViewFileSystemWithAcls |
| | hadoop.hdfs.server.datanode.TestDataNodeECN |
| | hadoop.hdfs.server.blockmanagement.TestPendingReconstruction |
| | hadoop.hdfs.web.TestWebHdfsUrl |
| | hadoop.hdfs.TestClientProtocolForPipelineRecovery |
| | hadoop.TestGenericRefresh |
| | hadoop.hdfs.server.namenode.TestFSEditLogLoader |
| | hadoop.hdfs.server.namenode.TestEditLogJournalFailures |
| | hadoop.fs.contract.hdfs.TestHDFSContractSeek |
| | hadoop.hdfs.TestPread |
| | hadoop.hdfs.server.namenode.snapshot.TestXAttrWithSnapshot |
| | hadoop.hdfs.tools.TestDFSZKFailoverController |
| | hadoop.hdfs.TestReadStripedFileWithDecodingCorruptData |
| | hadoop.fs.viewfs.TestViewFsFileStatusHdfs |
[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #763: [WIP] HADOOP-15984. Update jersey from 1.19 to 2.x
hadoop-yetus commented on a change in pull request #763: [WIP] HADOOP-15984. Update jersey from 1.19 to 2.x
URL: https://github.com/apache/hadoop/pull/763#discussion_r277959772

## File path: hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/wsrs/ParametersProvider.java

## @@ -57,11 +57,10 @@ public ParametersProvider(String driverParam, Class enumClass,
   }
 
   @Override
-  @SuppressWarnings("unchecked")
-  public Parameters getValue(HttpContext httpContext) {
+  public Parameters provide() {
     Map>> map = new HashMap>>();
-    Map> queryString =
-        httpContext.getRequest().getQueryParameters();
+

Review comment:
whitespace:end of line

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
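The diff in this review reflects the core injection change between Jersey 1 and Jersey 2: a Jersey 1 injectable implemented `getValue(HttpContext)`, while Jersey 2 resolves injected values through an HK2-style `Factory<T>` whose `provide()` method takes no request argument. A minimal self-contained sketch of that pattern follows; the `Factory` interface below is a hypothetical stand-in for `org.glassfish.hk2.api.Factory` (so the sketch compiles without Jersey on the classpath), and `ParametersFactory` is an illustrative name, not the actual Hadoop class.

```java
import java.util.Collections;
import java.util.List;
import java.util.Map;

// Hypothetical stand-in for HK2's Factory<T> contract used by Jersey 2
// injection; the real interface lives in org.glassfish.hk2.api.
interface Factory<T> {
  T provide();            // Jersey 2: no HttpContext argument
  void dispose(T instance);
}

// Sketch of a migrated provider: request state (here, query parameters)
// is supplied up front (in Jersey 2, typically via injected @Context
// objects) instead of being pulled from an HttpContext parameter.
class ParametersFactory implements Factory<Map<String, List<String>>> {
  private final Map<String, List<String>> queryParameters;

  ParametersFactory(Map<String, List<String>> queryParameters) {
    this.queryParameters = queryParameters;
  }

  @Override
  public Map<String, List<String>> provide() {
    // Hand out a read-only view, mirroring how providers expose
    // per-request parameter maps.
    return Collections.unmodifiableMap(queryParameters);
  }

  @Override
  public void dispose(Map<String, List<String>> instance) {
    // Nothing to release in this sketch.
  }
}
```

The practical consequence for the Hadoop patch is that every Jersey 1 injectable in httpfs/rbf has to be rewritten against the no-argument `provide()` contract, which is why the PR touches `ParametersProvider` this way.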
[jira] [Commented] (HADOOP-16264) [JDK11] Track failing Hadoop unit tests
[ https://issues.apache.org/jira/browse/HADOOP-16264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16824809#comment-16824809 ]

Siyao Meng commented on HADOOP-16264:
-------------------------------------

[~Steven Rand] Yep, I did one run of unit tests on trunk before; it turns out there are a lot more maven projects containing failures than on branch-3.1.2, but I didn't look into the details. That could be my environment's problem, though. I was running Ubuntu 16.04 + OpenJDK 11.0.1 at that time.

This jira is, at this moment, focused on tracking failed unit tests with JDK 11 on branch-3.1.2, which is more stable than branch-3.2 IMHO. This will automatically include some unresolved ones in trunk, like HADOOP-16115. I will probably track failures from new features in branch-3.2/trunk later.

> [JDK11] Track failing Hadoop unit tests
> ---
>
> Key: HADOOP-16264
> URL: https://issues.apache.org/jira/browse/HADOOP-16264
> Project: Hadoop Common
> Issue Type: Task
> Affects Versions: 3.1.2
> Reporter: Siyao Meng
> Assignee: Siyao Meng
> Priority: Major
> Attachments: test-run1.tgz
>
> Although there is still a lot of work to do before we can compile Hadoop with JDK 11 (HADOOP-15338), it is possible to compile Hadoop with JDK 8 and run it (e.g. HDFS NN/DN, YARN NM/RM) on JDK 11 at this moment.
> But after compiling branch-3.1.2 with JDK 8, I ran unit tests with JDK 11 and there are a LOT of unit test failures (44 out of 96 maven projects contain at least one unit test failure according to the maven reactor summary). This may well indicate some functionality is actually broken on JDK 11. Some of the failures already have a jira number. Some might have been fixed in 3.2.0. Some might share the same root cause.
> By definition, this jira should be part of HADOOP-15338. But the goal of this one is just to keep track of unit test failures and (hopefully) resolve all of them soon.
-- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] ajayydv merged pull request #757: HDDS-1450. Fix nightly run failures after HDDS-976. Contributed by Xi…
ajayydv merged pull request #757: HDDS-1450. Fix nightly run failures after HDDS-976. Contributed by Xi…
URL: https://github.com/apache/hadoop/pull/757
[GitHub] [hadoop] ajayydv commented on issue #757: HDDS-1450. Fix nightly run failures after HDDS-976. Contributed by Xi…
ajayydv commented on issue #757: HDDS-1450. Fix nightly run failures after HDDS-976. Contributed by Xi…
URL: https://github.com/apache/hadoop/pull/757#issuecomment-486064649

+1
[jira] [Commented] (HADOOP-16264) [JDK11] Track failing Hadoop unit tests
[ https://issues.apache.org/jira/browse/HADOOP-16264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16824803#comment-16824803 ]

Siyao Meng commented on HADOOP-16264:
-------------------------------------

[~giovanni.fumarola] Sure. I am running with -Dsurefire.printSummary on branch-3.1.2 + HADOOP-12760 + *HADOOP-15775* + *HADOOP-16016*. Hopefully this will give me a list of failed/erred unit tests. Will upload the result as I finish the run.
[jira] [Assigned] (HADOOP-15984) Update jersey from 1.19 to 2.x
[ https://issues.apache.org/jira/browse/HADOOP-15984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Akira Ajisaka reassigned HADOOP-15984:
--------------------------------------

Assignee: (was: Akira Ajisaka)

> Update jersey from 1.19 to 2.x
> --
>
> Key: HADOOP-15984
> URL: https://issues.apache.org/jira/browse/HADOOP-15984
> Project: Hadoop Common
> Issue Type: Improvement
> Reporter: Akira Ajisaka
> Priority: Critical
>
> jersey-json 1.19 depends on Jackson 1.9.2. Let's upgrade.
[jira] [Work stopped] (HADOOP-15984) Update jersey from 1.19 to 2.x
[ https://issues.apache.org/jira/browse/HADOOP-15984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Work on HADOOP-15984 stopped by Akira Ajisaka.
----------------------------------------------
[jira] [Commented] (HADOOP-15984) Update jersey from 1.19 to 2.x
[ https://issues.apache.org/jira/browse/HADOOP-15984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16824790#comment-16824790 ]

Akira Ajisaka commented on HADOOP-15984:
----------------------------------------

Created a work-in-progress pull request: https://github.com/apache/hadoop/pull/763

I'm now working on HADOOP-16206 and don't have time to continue this work. Please feel free to take it over, starting with the PR.
[GitHub] [hadoop] aajisaka opened a new pull request #763: [WIP] HADOOP-15984. Update jersey from 1.19 to 2.x
aajisaka opened a new pull request #763: [WIP] HADOOP-15984. Update jersey from 1.19 to 2.x
URL: https://github.com/apache/hadoop/pull/763

- [ ] hadoop-common
- [ ] hadoop-hdfs
- [ ] hadoop-mapreduce
- [ ] hadoop-yarn
- [ ] others
[jira] [Commented] (HADOOP-16266) Add more fine-grained processing time metrics to the RPC layer
[ https://issues.apache.org/jira/browse/HADOOP-16266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16824771#comment-16824771 ]

Hadoop QA commented on HADOOP-16266:
------------------------------------

(x) **-1 overall**

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 49m 28s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. |
|| || || || trunk Compile Tests ||
| 0 | mvndep | 8m 20s | Maven dependency ordering for branch |
| +1 | mvninstall | 27m 3s | trunk passed |
| +1 | compile | 27m 51s | trunk passed |
| +1 | checkstyle | 2m 46s | trunk passed |
| +1 | mvnsite | 3m 41s | trunk passed |
| +1 | shadedclient | 20m 44s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 5m 8s | trunk passed |
| +1 | javadoc | 2m 50s | trunk passed |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 0m 31s | Maven dependency ordering for patch |
| +1 | mvninstall | 2m 26s | the patch passed |
| +1 | compile | 23m 21s | the patch passed |
| +1 | javac | 23m 21s | the patch passed |
| -0 | checkstyle | 3m 18s | root: The patch generated 11 new + 307 unchanged - 6 fixed = 318 total (was 313) |
| +1 | mvnsite | 3m 11s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 12m 35s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 4m 44s | the patch passed |
| +1 | javadoc | 2m 35s | the patch passed |
|| || || || Other Tests ||
| -1 | unit | 10m 28s | hadoop-common in the patch failed. |
| -1 | unit | 147m 17s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 1m 6s | The patch does not generate ASF License warnings. |
| | | 356m 46s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.ipc.TestProtoBufRpc |
| | hadoop.hdfs.qjournal.server.TestJournalNode |
| | hadoop.hdfs.server.datanode.TestDirectoryScanner |
| | hadoop.hdfs.web.TestWebHdfsTimeouts |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HADOOP-16266 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12966810/HADOOP-16266.002.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux b85f8bbfd13b 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git
[GitHub] [hadoop] hadoop-yetus commented on issue #568: HADOOP-15691 Add PathCapabilities to FS and FC to complement StreamCapabilities
hadoop-yetus commented on issue #568: HADOOP-15691 Add PathCapabilities to FS and FC to complement StreamCapabilities
URL: https://github.com/apache/hadoop/pull/568#issuecomment-486043659

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| 0 | reexec | 74 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| +1 | test4tests | 0 | The patch appears to include 8 new or modified test files. |
||| _ trunk Compile Tests _ |
| 0 | mvndep | 329 | Maven dependency ordering for branch |
| +1 | mvninstall | 1442 | trunk passed |
| +1 | compile | 1490 | trunk passed |
| +1 | checkstyle | 222 | trunk passed |
| +1 | mvnsite | 391 | trunk passed |
| +1 | shadedclient | 1484 | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 500 | trunk passed |
| +1 | javadoc | 252 | trunk passed |
||| _ Patch Compile Tests _ |
| 0 | mvndep | 26 | Maven dependency ordering for patch |
| +1 | mvninstall | 195 | the patch passed |
| +1 | compile | 1052 | the patch passed |
| -1 | javac | 1052 | root generated 2 new + 1496 unchanged - 0 fixed = 1498 total (was 1496) |
| -0 | checkstyle | 148 | root: The patch generated 11 new + 598 unchanged - 0 fixed = 609 total (was 598) |
| +1 | mvnsite | 290 | the patch passed |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | shadedclient | 746 | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 468 | the patch passed |
| +1 | javadoc | 211 | the patch passed |
||| _ Other Tests _ |
| +1 | unit | 560 | hadoop-common in the patch passed. |
| +1 | unit | 119 | hadoop-hdfs-client in the patch passed. |
| +1 | unit | 271 | hadoop-hdfs-httpfs in the patch passed. |
| +1 | unit | 299 | hadoop-aws in the patch passed. |
| +1 | unit | 84 | hadoop-azure in the patch passed. |
| +1 | unit | 60 | hadoop-azure-datalake in the patch passed. |
| +1 | asflicense | 44 | The patch does not generate ASF License warnings. |
| | | 10465 | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/hadoop-multibranch/job/PR-568/4/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/568 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 37f7dbdddc93 4.4.0-144-generic #170~14.04.1-Ubuntu SMP Mon Mar 18 15:02:05 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / fec9bf4 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-568/4/artifact/out/diff-compile-javac-root.txt |
| checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-568/4/artifact/out/diff-checkstyle-root.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-568/4/testReport/ |
| Max. process+thread count | 1446 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs-httpfs hadoop-tools/hadoop-aws hadoop-tools/hadoop-azure hadoop-tools/hadoop-azure-datalake U: . |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-568/4/console |
| Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |

This message was automatically generated.
[jira] [Commented] (HADOOP-16269) ABFS: add listFileStatus with StartFrom
[ https://issues.apache.org/jira/browse/HADOOP-16269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16824741#comment-16824741 ]

Hadoop QA commented on HADOOP-16269:
------------------------------------

(/) **+1 overall**

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 29s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 20m 32s | trunk passed |
| +1 | compile | 0m 31s | trunk passed |
| +1 | checkstyle | 0m 20s | trunk passed |
| +1 | mvnsite | 0m 33s | trunk passed |
| +1 | shadedclient | 12m 31s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 0m 42s | trunk passed |
| +1 | javadoc | 0m 24s | trunk passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 0m 29s | the patch passed |
| +1 | compile | 0m 25s | the patch passed |
| +1 | javac | 0m 25s | the patch passed |
| -0 | checkstyle | 0m 15s | hadoop-tools/hadoop-azure: The patch generated 4 new + 2 unchanged - 0 fixed = 6 total (was 2) |
| +1 | mvnsite | 0m 33s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 13m 7s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 0m 55s | the patch passed |
| +1 | javadoc | 0m 20s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 1m 19s | hadoop-azure in the patch passed. |
| +1 | asflicense | 0m 29s | The patch does not generate ASF License warnings. |
| | | 54m 11s | |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HADOOP-16269 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12966825/HADOOP-16269-003.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 051dfbc40373 4.4.0-144-generic #170~14.04.1-Ubuntu SMP Mon Mar 18 15:02:05 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 9d40062 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/16186/artifact/out/diff-checkstyle-hadoop-tools_hadoop-azure.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/16186/testReport/ |
| Max. process+thread count | 306 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/16186/console |
| Powered by | Apache Yetus 0.8.0 http://yetus.apache.org |
[GitHub] [hadoop] jnp commented on a change in pull request #749: HDDS-1395. Key write fails with BlockOutputStream has been closed exception
jnp commented on a change in pull request #749: HDDS-1395. Key write fails with BlockOutputStream has been closed exception
URL: https://github.com/apache/hadoop/pull/749#discussion_r277878283

## File path: hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/CommitWatcher.java

## @@ -188,7 +188,6 @@ void releaseBuffersOnException() {
    */
   public XceiverClientReply watchForCommit(long commitIndex)
       throws IOException {
-    Preconditions.checkState(!commitIndex2flushedDataMap.isEmpty());

Review comment:
Why is the precondition removed?
[GitHub] [hadoop] jnp commented on a change in pull request #749: HDDS-1395. Key write fails with BlockOutputStream has been closed exception
jnp commented on a change in pull request #749: HDDS-1395. Key write fails with BlockOutputStream has been closed exception
URL: https://github.com/apache/hadoop/pull/749#discussion_r277932422

## File path: hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/BlockOutputStreamEntryPool.java

@@ -0,0 +1,365 @@ (new file; added lines reflowed for readability)

/*
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.apache.hadoop.ozone.client.io;

import com.google.common.base.Preconditions;
import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos;
import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
import org.apache.hadoop.hdds.scm.XceiverClientManager;
import org.apache.hadoop.hdds.scm.container.common.helpers.ExcludeList;
import org.apache.hadoop.hdds.scm.pipeline.PipelineID;
import org.apache.hadoop.hdds.scm.storage.BufferPool;
import org.apache.hadoop.ozone.OzoneConfigKeys;
import org.apache.hadoop.ozone.om.helpers.*;
import org.apache.hadoop.ozone.om.protocol.OzoneManagerProtocol;
import org.apache.hadoop.security.UserGroupInformation;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.ListIterator;

/**
 * This class manages the stream entries list and handles block allocation
 * from OzoneManager.
 */
public class BlockOutputStreamEntryPool {

  public static final Logger LOG =
      LoggerFactory.getLogger(BlockOutputStreamEntryPool.class);

  private final List<BlockOutputStreamEntry> streamEntries;
  private int currentStreamIndex;
  private final OzoneManagerProtocol omClient;
  private final OmKeyArgs keyArgs;
  private final XceiverClientManager xceiverClientManager;
  private final int chunkSize;
  private final String requestID;
  private final long streamBufferFlushSize;
  private final long streamBufferMaxSize;
  private final long watchTimeout;
  private final long blockSize;
  private final int bytesPerChecksum;
  private final ContainerProtos.ChecksumType checksumType;
  private final BufferPool bufferPool;
  private OmMultipartCommitUploadPartInfo commitUploadPartInfo;
  private final long openID;
  private ExcludeList excludeList;

  @SuppressWarnings("parameternumber")
  public BlockOutputStreamEntryPool(OzoneManagerProtocol omClient,
      int chunkSize, String requestId, HddsProtos.ReplicationFactor factor,
      HddsProtos.ReplicationType type, long bufferFlushSize, long bufferMaxSize,
      long size, long watchTimeout, ContainerProtos.ChecksumType checksumType,
      int bytesPerChecksum, String uploadID, int partNumber,
      boolean isMultipart, OmKeyInfo info,
      XceiverClientManager xceiverClientManager, long openID) {
    streamEntries = new ArrayList<>();
    currentStreamIndex = 0;
    this.omClient = omClient;
    this.keyArgs = new OmKeyArgs.Builder().setVolumeName(info.getVolumeName())
        .setBucketName(info.getBucketName()).setKeyName(info.getKeyName())
        .setType(type).setFactor(factor).setDataSize(info.getDataSize())
        .setIsMultipartKey(isMultipart).setMultipartUploadID(uploadID)
        .setMultipartUploadPartNumber(partNumber).build();
    this.xceiverClientManager = xceiverClientManager;
    this.chunkSize = chunkSize;
    this.requestID = requestId;
    this.streamBufferFlushSize = bufferFlushSize;
    this.streamBufferMaxSize = bufferMaxSize;
    this.blockSize = size;
    this.watchTimeout = watchTimeout;
    this.bytesPerChecksum = bytesPerChecksum;
    this.checksumType = checksumType;
    this.openID = openID;
    this.excludeList = new ExcludeList();

    Preconditions.checkState(chunkSize > 0);
    Preconditions.checkState(streamBufferFlushSize > 0);
    Preconditions.checkState(streamBufferMaxSize > 0);
    Preconditions.checkState(blockSize > 0);
    Preconditions.checkState(streamBufferFlushSize % chunkSize == 0);
    Preconditions.checkState(streamBufferMaxSize % streamBufferFlushSize == 0);
    Preconditions.checkState(blockSize % streamBufferMaxSize == 0);
    this.bufferPool =
        new BufferPool(chunkSize, (int) streamBufferMaxSize / chunkSize);
  }

  public
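The BlockOutputStreamEntryPool constructor enforces a chain of divisibility invariants: the flush size must be a multiple of the chunk size, the max buffer size a multiple of the flush size, and the block size a multiple of the max buffer size. A standalone sketch of just those checks, with plain exceptions in place of Guava's Preconditions (the class and method names here are illustrative, not the Ozone API):

```java
// Sketch of the buffer-size invariants checked in the constructor above.
class BufferSizeInvariants {

  static void validate(long chunkSize, long flushSize, long maxSize,
      long blockSize) {
    if (chunkSize <= 0 || flushSize <= 0 || maxSize <= 0 || blockSize <= 0) {
      throw new IllegalStateException("all sizes must be positive");
    }
    // Each tier of buffering must divide evenly into the next one up,
    // so buffers can be flushed and recycled without partial chunks.
    if (flushSize % chunkSize != 0) {
      throw new IllegalStateException("flush size must be a multiple of chunk size");
    }
    if (maxSize % flushSize != 0) {
      throw new IllegalStateException("max buffer size must be a multiple of flush size");
    }
    if (blockSize % maxSize != 0) {
      throw new IllegalStateException("block size must be a multiple of max buffer size");
    }
  }

  public static void main(String[] args) {
    // 16 MB chunks, 64 MB flush, 256 MB max buffer, 256 MB block: valid.
    validate(16L << 20, 64L << 20, 256L << 20, 256L << 20);
  }
}
```

With sizes satisfying this chain, the pool can size its BufferPool as exactly `streamBufferMaxSize / chunkSize` chunk-sized buffers, as the last line of the constructor does.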
[jira] [Assigned] (HADOOP-16270) [JDK11] Upgrade Maven Dependency Plugin to the latest version
[ https://issues.apache.org/jira/browse/HADOOP-16270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka reassigned HADOOP-16270: -- Assignee: Xieming Li > [JDK11] Upgrade Maven Dependency Plugin to the latest version > - > > Key: HADOOP-16270 > URL: https://issues.apache.org/jira/browse/HADOOP-16270 > Project: Hadoop Common > Issue Type: Sub-task > Components: build >Reporter: Akira Ajisaka >Assignee: Xieming Li >Priority: Major > Labels: newbie > > HADOOP-14979 upgraded maven dependency plugin to 3.0.2, but the version was > overridden to 3.0.1 by YARN-7129 and the following error occurred again. > {noformat} > [INFO] --- maven-dependency-plugin:3.0.1:list (deplist) @ hadoop-streaming --- > java.lang.NoSuchMethodException: > jdk.internal.module.ModuleReferenceImpl.descriptor() > at java.base/java.lang.Class.getDeclaredMethod(Class.java:2476) > at > org.apache.maven.plugins.dependency.utils.DependencyStatusSets.getModuleDescriptor(DependencyStatusSets.java:272) > at > org.apache.maven.plugins.dependency.utils.DependencyStatusSets.buildArtifactListOutput(DependencyStatusSets.java:227) > at > org.apache.maven.plugins.dependency.utils.DependencyStatusSets.getOutput(DependencyStatusSets.java:165) > at > org.apache.maven.plugins.dependency.resolvers.ResolveDependenciesMojo.doExecute(ResolveDependenciesMojo.java:90) > at > org.apache.maven.plugins.dependency.AbstractDependencyMojo.execute(AbstractDependencyMojo.java:143) > at > org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:137) > at > org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208) > {noformat} > Let's upgrade the plugin version to fix the build failure in Java 11. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] AngersZhuuuu commented on issue #756: [HDFS-14437]Fix BUG mentionted in HDFS-14437
AngersZhuuuu commented on issue #756: [HDFS-14437]Fix BUG mentionted in HDFS-14437 URL: https://github.com/apache/hadoop/pull/756#issuecomment-486036622 @arp7 Hi, can you help me review this? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16270) [JDK11] Upgrade Maven Dependency Plugin to the latest version
[ https://issues.apache.org/jira/browse/HADOOP-16270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-16270: --- Issue Type: Sub-task (was: Bug) Parent: HADOOP-15338
[jira] [Updated] (HADOOP-16270) [JDK11] Upgrade Maven Dependency Plugin to the latest version
[ https://issues.apache.org/jira/browse/HADOOP-16270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-16270: --- Labels: newbie (was: )
[jira] [Updated] (HADOOP-16270) [JDK11] Upgrade Maven Dependency Plugin to the latest version
[ https://issues.apache.org/jira/browse/HADOOP-16270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-16270: --- Target Version/s: 3.3.0
[jira] [Created] (HADOOP-16270) [JDK11] Upgrade Maven Dependency Plugin to the latest version
Akira Ajisaka created HADOOP-16270: -- Summary: [JDK11] Upgrade Maven Dependency Plugin to the latest version Key: HADOOP-16270 URL: https://issues.apache.org/jira/browse/HADOOP-16270 Project: Hadoop Common Issue Type: Bug Components: build Reporter: Akira Ajisaka HADOOP-14979 upgraded maven dependency plugin to 3.0.2, but the version was overridden to 3.0.1 by YARN-7129 and the following error occurred again. {noformat} [INFO] --- maven-dependency-plugin:3.0.1:list (deplist) @ hadoop-streaming --- java.lang.NoSuchMethodException: jdk.internal.module.ModuleReferenceImpl.descriptor() at java.base/java.lang.Class.getDeclaredMethod(Class.java:2476) at org.apache.maven.plugins.dependency.utils.DependencyStatusSets.getModuleDescriptor(DependencyStatusSets.java:272) at org.apache.maven.plugins.dependency.utils.DependencyStatusSets.buildArtifactListOutput(DependencyStatusSets.java:227) at org.apache.maven.plugins.dependency.utils.DependencyStatusSets.getOutput(DependencyStatusSets.java:165) at org.apache.maven.plugins.dependency.resolvers.ResolveDependenciesMojo.doExecute(ResolveDependenciesMojo.java:90) at org.apache.maven.plugins.dependency.AbstractDependencyMojo.execute(AbstractDependencyMojo.java:143) at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:137) at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208) {noformat} Let's upgrade the plugin version to fix the build failure in Java 11. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
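The root cause described in HADOOP-16270 is version drift: one module's POM overrode the plugin version that HADOOP-14979 had fixed. The usual remedy is to pin the plugin once in the parent's pluginManagement so child modules inherit it. A hedged POM sketch — the property name and the exact version are illustrative, not necessarily what hadoop-project/pom.xml uses:

```xml
<!-- Sketch: pin maven-dependency-plugin once in the parent POM.
     The property name and version here are assumptions. -->
<properties>
  <maven-dependency-plugin.version>3.1.1</maven-dependency-plugin.version>
</properties>
<build>
  <pluginManagement>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-dependency-plugin</artifactId>
        <version>${maven-dependency-plugin.version}</version>
      </plugin>
    </plugins>
  </pluginManagement>
</build>
```

Child modules that declare the plugin without a `<version>` then pick up the managed version; a module that re-declares its own version (as YARN-7129 did) will still override it, which is what this issue is guarding against.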
[jira] [Commented] (HADOOP-16269) ABFS: add listFileStatus with StartFrom
[ https://issues.apache.org/jira/browse/HADOOP-16269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16824701#comment-16824701 ] Da Zhou commented on HADOOP-16269: -- Added continuation token for non-xns account. All tests passed: non-xns account: Tests run: 40, Failures: 0, Errors: 0, Skipped: 0 Tests run: 342, Failures: 0, Errors: 0, Skipped: 207 Tests run: 190, Failures: 0, Errors: 0, Skipped: 15 xns account: Tests run: 40, Failures: 0, Errors: 0, Skipped: 0 Tests run: 342, Failures: 0, Errors: 0, Skipped: 21 Tests run: 190, Failures: 0, Errors: 0, Skipped: 15 > ABFS: add listFileStatus with StartFrom > --- > > Key: HADOOP-16269 > URL: https://issues.apache.org/jira/browse/HADOOP-16269 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Affects Versions: 3.2.0 >Reporter: Da Zhou >Assignee: Da Zhou >Priority: Major > Attachments: HADOOP-16269-001.patch, HADOOP-16269-002.patch, > HADOOP-16269-003.patch > > > Adding a listFileStatus in a path, starting from an entry name, in lexical order. > This is added to AzureBlobFileSystemStore and won't be exposed at the FS-level > API.
[jira] [Updated] (HADOOP-16269) ABFS: add listFileStatus with StartFrom
[ https://issues.apache.org/jira/browse/HADOOP-16269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Da Zhou updated HADOOP-16269: - Attachment: HADOOP-16269-003.patch
[jira] [Commented] (HADOOP-16264) [JDK11] Track failing Hadoop unit tests
[ https://issues.apache.org/jira/browse/HADOOP-16264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16824699#comment-16824699 ] Steven Rand commented on HADOOP-16264: -- [~smeng], would it make more sense to run the tests on trunk instead of branch-3.1.2? My guess is that on branch-3.1.2 you're going to run into a lot of failures that have already been fixed, e.g., HADOOP-12760 like you mention above. > [JDK11] Track failing Hadoop unit tests > --- > > Key: HADOOP-16264 > URL: https://issues.apache.org/jira/browse/HADOOP-16264 > Project: Hadoop Common > Issue Type: Task >Affects Versions: 3.1.2 >Reporter: Siyao Meng >Assignee: Siyao Meng >Priority: Major > Attachments: test-run1.tgz > > > Although there is still a lot of work to do before we can compile Hadoop with > JDK 11 (HADOOP-15338), it is possible to compile Hadoop with JDK 8 and run > (e.g. HDFS NN/DN, YARN NM/RM) on JDK 11 at this moment. > But after compiling branch-3.1.2 with JDK 8, I ran unit tests with JDK 11 and > there are a LOT of unit test failures (44 out of 96 maven projects contain at > least one unit test failure according to the maven reactor summary). This may > well indicate some functionalities are actually broken on JDK 11. Some of > them already have a jira number. Some of them might have been fixed in 3.2.0. > Some of them might share the same root cause. > By definition, this jira should be part of HADOOP-15338. But the goal of this > one is just to keep track of unit test failures and (hopefully) resolve all > of them soon.
[GitHub] [hadoop] swagle commented on issue #759: HDDS-1453. Fix unit test TestConfigurationFields broken on trunk. (swagle)
swagle commented on issue #759: HDDS-1453. Fix unit test TestConfigurationFields broken on trunk. (swagle) URL: https://github.com/apache/hadoop/pull/759#issuecomment-486007180 Fixed with HDDS-1450.
[GitHub] [hadoop] swagle closed pull request #759: HDDS-1453. Fix unit test TestConfigurationFields broken on trunk. (swagle)
swagle closed pull request #759: HDDS-1453. Fix unit test TestConfigurationFields broken on trunk. (swagle) URL: https://github.com/apache/hadoop/pull/759
[jira] [Commented] (HADOOP-16264) [JDK11] Track failing Hadoop unit tests
[ https://issues.apache.org/jira/browse/HADOOP-16264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16824616#comment-16824616 ] Giovanni Matteo Fumarola commented on HADOOP-16264: --- Thanks [~smeng]. Can you upload directly a list of failed tests? Please rerun the "Timed out waiting" test failures, to figure out if they are related to your machine, buggy, or flaky.
[jira] [Commented] (HADOOP-16266) Add more fine-grained processing time metrics to the RPC layer
[ https://issues.apache.org/jira/browse/HADOOP-16266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16824524#comment-16824524 ] Christopher Gregorian commented on HADOOP-16266: Thanks for the comments [~xkrogen]! Addressed most of them: added tests for the ProcessingDetails class but still need tests to ensure that the times are getting populated when a call is processed. Also still need to add comments around the math in {{Server#updateMetrics()}} (trying to understand it better first). > Add more fine-grained processing time metrics to the RPC layer > -- > > Key: HADOOP-16266 > URL: https://issues.apache.org/jira/browse/HADOOP-16266 > Project: Hadoop Common > Issue Type: Improvement > Components: ipc >Reporter: Christopher Gregorian >Assignee: Christopher Gregorian >Priority: Minor > Labels: rpc > Attachments: HADOOP-16266.001.patch, HADOOP-16266.002.patch > > > Splitting off of HDFS-14403 to track the first part: introduces more > fine-grained measuring of how a call's processing time is split up. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
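The ProcessingDetails class being tested in the HADOOP-16266 patch is not shown in this thread. As a rough, hypothetical illustration of the idea — splitting a single RPC call's handling time into named sub-timings that are accumulated and later read out by the metrics layer — here is a minimal accumulator; the real API in the patch may differ in names and granularity:

```java
import java.util.EnumMap;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch of fine-grained per-call timing, in the spirit of
// HADOOP-16266's ProcessingDetails. Not the actual Hadoop class.
class ProcessingDetailsSketch {

  enum Timing { ENQUEUE, QUEUE, PROCESSING, RESPONSE }

  // Nanosecond totals per phase of the call's lifecycle.
  private final EnumMap<Timing, Long> nanos = new EnumMap<>(Timing.class);

  void add(Timing t, long durationNanos) {
    // Accumulate: a phase may be entered more than once per call.
    nanos.merge(t, durationNanos, Long::sum);
  }

  long get(Timing t, TimeUnit unit) {
    return unit.convert(nanos.getOrDefault(t, 0L), TimeUnit.NANOSECONDS);
  }
}
```

The server-side usage would be to stamp `System.nanoTime()` at each phase boundary (enqueue, dequeue, handler return, response flush) and `add` the deltas, so `Server#updateMetrics()` can report queue time and processing time separately rather than one lumped duration.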
[jira] [Updated] (HADOOP-16266) Add more fine-grained processing time metrics to the RPC layer
[ https://issues.apache.org/jira/browse/HADOOP-16266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Christopher Gregorian updated HADOOP-16266: --- Attachment: HADOOP-16266.002.patch
[jira] [Commented] (HADOOP-16269) ABFS: add listFileStatus with StartFrom
[ https://issues.apache.org/jira/browse/HADOOP-16269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16824511#comment-16824511 ] Hadoop QA commented on HADOOP-16269: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 26s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 49s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 30s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 21s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 33s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 16s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 40s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 22s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 15s{color} | {color:orange} hadoop-tools/hadoop-azure: The patch generated 4 new + 2 unchanged - 0 fixed = 6 total (was 2) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 57s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 15s{color} | {color:green} hadoop-azure in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 26s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 51m 36s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e | | JIRA Issue | HADOOP-16269 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12966806/HADOOP-16269-002.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 7c891b7dc5a3 4.4.0-144-generic #170~14.04.1-Ubuntu SMP Mon Mar 18 15:02:05 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / c504eee | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_191 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/16183/artifact/out/diff-checkstyle-hadoop-tools_hadoop-azure.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/16183/testReport/ | | Max. process+thread count | 341 (vs. ulimit of 1) | | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/16183/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org
[GitHub] [hadoop] hadoop-yetus commented on issue #762: HDDS-1455. Inconsistent naming convention with Ozone Kerberos configu…
hadoop-yetus commented on issue #762: HDDS-1455. Inconsistent naming convention with Ozone Kerberos configu… URL: https://github.com/apache/hadoop/pull/762#issuecomment-485969579 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 26 | Docker mode activated. | ||| _ Prechecks _ | | +1 | @author | 0 | The patch does not contain any @author tags. | | -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | ||| _ trunk Compile Tests _ | | 0 | mvndep | 26 | Maven dependency ordering for branch | | +1 | mvninstall | 1028 | trunk passed | | +1 | compile | 1067 | trunk passed | | +1 | checkstyle | 137 | trunk passed | | +1 | mvnsite | 164 | trunk passed | | +1 | shadedclient | 669 | branch has no errors when building and testing our client artifacts. | | 0 | findbugs | 0 | Skipped patched modules with no Java source: hadoop-hdds/docs hadoop-ozone/dist | | +1 | findbugs | 83 | trunk passed | | +1 | javadoc | 105 | trunk passed | ||| _ Patch Compile Tests _ | | 0 | mvndep | 26 | Maven dependency ordering for patch | | -1 | mvninstall | 19 | dist in the patch failed. | | +1 | compile | 1083 | the patch passed | | +1 | javac | 1083 | the patch passed | | +1 | checkstyle | 143 | the patch passed | | +1 | mvnsite | 123 | the patch passed | | +1 | shellcheck | 0 | There were no new shellcheck issues. | | +1 | shelldocs | 31 | There were no new shelldocs issues. | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | xml | 1 | The patch has no ill-formed XML file. | | +1 | shadedclient | 668 | patch has no errors when building and testing our client artifacts. 
| | 0 | findbugs | 0 | Skipped patched modules with no Java source: hadoop-hdds/docs hadoop-ozone/dist | | +1 | findbugs | 91 | the patch passed | | +1 | javadoc | 108 | the patch passed | ||| _ Other Tests _ | | -1 | unit | 94 | common in the patch failed. | | +1 | unit | 32 | docs in the patch passed. | | +1 | unit | 34 | dist in the patch passed. | | +1 | asflicense | 43 | The patch does not generate ASF License warnings. | | | | 6160 | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdds.scm.net.TestNodeSchemaManager | | | hadoop.hdds.scm.net.TestNetworkTopologyImpl | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/hadoop-multibranch/job/PR-762/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/762 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml shellcheck shelldocs | | uname | Linux a10b4104e3e5 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 59ded76 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_191 | | shellcheck | v0.4.6 | | findbugs | v3.1.0-RC1 | | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-762/1/artifact/out/patch-mvninstall-hadoop-ozone_dist.txt | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-762/1/artifact/out/patch-unit-hadoop-hdds_common.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-762/1/testReport/ | | Max. process+thread count | 446 (vs. ulimit of 5500) | | modules | C: hadoop-hdds/common hadoop-hdds/docs hadoop-ozone/dist U: . | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-762/1/console | | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org | This message was automatically generated. 
[jira] [Commented] (HADOOP-12690) Consolidate access of sun.misc.Unsafe
[ https://issues.apache.org/jira/browse/HADOOP-12690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16824508#comment-16824508 ] Hadoop QA commented on HADOOP-12690: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 43s{color} | {color:red} HADOOP-12690 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HADOOP-12690 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12789041/HADOOP-12690-v3.1.patch | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/16184/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > Consolidate access of sun.misc.Unsafe > -- > > Key: HADOOP-12690 > URL: https://issues.apache.org/jira/browse/HADOOP-12690 > Project: Hadoop Common > Issue Type: Bug >Reporter: Junping Du >Assignee: Junping Du >Priority: Major > Attachments: HADOOP-12690-v2.1.patch, HADOOP-12690-v2.patch, > HADOOP-12690-v3.1.patch, HADOOP-12690-v3.patch, HADOOP-12690.patch > > > Per discussion in Hadoop-12630 > (https://issues.apache.org/jira/browse/HADOOP-12630?focusedCommentId=15082142=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15082142), > we found the access of sun.misc.Unsafe could be problematic for some JVMs in > other platforms. Also, hints from other comments, it is better to consolidate > it as a helper/utility method to shared with several places > (FastByteComparisons, NativeIO, ShortCircuitShm). 
-- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] jnp commented on a change in pull request #749: HDDS-1395. Key write fails with BlockOutputStream has been closed exception
jnp commented on a change in pull request #749: HDDS-1395. Key write fails with BlockOutputStream has been closed exception
URL: https://github.com/apache/hadoop/pull/749#discussion_r277862815

File path: hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/BlockOutputStream.java

@@ -574,14 +574,18 @@ public void cleanup(boolean invalidateClient) {
    * @throws IOException if stream is closed
    */
   private void checkOpen() throws IOException {
-    if (xceiverClient == null) {
+    if (isClosed()) {
       throw new IOException("BlockOutputStream has been closed.");
     } else if (getIoException() != null) {
       adjustBuffersOnException();
       throw getIoException();
     }
   }
+
+  public boolean isClosed() {

Review comment: This need not be public. If you need it for testing, please annotate it.
[GitHub] [hadoop] jnp commented on a change in pull request #749: HDDS-1395. Key write fails with BlockOutputStream has been closed exception
jnp commented on a change in pull request #749: HDDS-1395. Key write fails with BlockOutputStream has been closed exception
URL: https://github.com/apache/hadoop/pull/749#discussion_r277863089

File path: hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/BlockOutputStream.java (same hunk as in the previous comment)

Review comment: The isClosed method need not be public.
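The pattern the reviewer is asking for can be sketched as follows. This is a simplified stand-in, not the actual BlockOutputStream: the closed-state check is package-private rather than public, and in Hadoop code it would typically carry Guava's @VisibleForTesting annotation (omitted here to keep the sketch dependency-free).

```java
import java.io.IOException;

class BlockStreamSketch {
    // Stand-in for the XceiverClientSpi reference the real class holds.
    private Object xceiverClient = new Object();

    // Package-private rather than public, per the review; in Hadoop this
    // would be annotated @VisibleForTesting to document why tests call it.
    boolean isClosed() {
        return xceiverClient == null;
    }

    void close() {
        xceiverClient = null;
    }

    // Guard invoked at the top of every write path.
    private void checkOpen() throws IOException {
        if (isClosed()) {
            throw new IOException("BlockOutputStream has been closed.");
        }
    }

    void write(byte[] data) throws IOException {
        checkOpen();
        // ... buffer and flush data to the datanode pipeline ...
    }
}
```

Production callers never need isClosed() directly because every write path goes through checkOpen(); only tests inspect the state, which is what the annotation would document.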
[jira] [Commented] (HADOOP-12690) Consolidate access of sun.misc.Unsafe
[ https://issues.apache.org/jira/browse/HADOOP-12690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16824505#comment-16824505 ] Wei-Chiu Chuang commented on HADOOP-12690:

We probably want to redo this work. sun.misc.Unsafe is removed from JDK 11, so it would be better to remove the use of this API entirely. Similar to HADOOP-12760.
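A minimal sketch of the consolidation the issue asks for: one utility performs the sun.misc.Unsafe lookup once, so FastByteComparisons, NativeIO, and ShortCircuitShm could share it, and JVMs that do not ship the class degrade gracefully. The class and method names here are illustrative, not the ones Hadoop uses.

```java
import java.lang.reflect.Field;

// Hypothetical shared access point for sun.misc.Unsafe. Going through
// reflection keeps this class loadable even on JVMs where sun.misc.Unsafe
// does not exist, which was the original portability concern.
final class UnsafeAccess {
    private static final Object UNSAFE = lookup();

    private static Object lookup() {
        try {
            Class<?> clazz = Class.forName("sun.misc.Unsafe");
            Field f = clazz.getDeclaredField("theUnsafe");
            f.setAccessible(true);
            return f.get(null);
        } catch (Throwable t) {
            // Alternative JVMs / restricted runtimes: signal "unavailable"
            // instead of failing class initialization.
            return null;
        }
    }

    static boolean isAvailable() {
        return UNSAFE != null;
    }

    private UnsafeAccess() {}
}
```

Callers would branch on isAvailable() and fall back to plain java.nio comparisons when it returns false, instead of each class probing Unsafe on its own.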
[GitHub] [hadoop] anuengineer commented on issue #762: HDDS-1455. Inconsistent naming convention with Ozone Kerberos configu…
anuengineer commented on issue #762: HDDS-1455. Inconsistent naming convention with Ozone Kerberos configu… URL: https://github.com/apache/hadoop/pull/762#issuecomment-485953416

+1, pending Jenkins. Thanks for fixing this.
[jira] [Commented] (HADOOP-16082) FsShell ls: Add option -i to print inode id
[ https://issues.apache.org/jira/browse/HADOOP-16082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16824485#comment-16824485 ] Siyao Meng commented on HADOOP-16082:

[~adam.antal] Thanks for the comment. I believe your idea is another way to display the inode id, but it does not address how we could expose fileId from FileStatus, which is the actual problem I'm facing. Since fileId is only in HdfsLocatedFileStatus and not in FileStatus, and I shouldn't risk changing a public stable interface just to expose fileId, I have to work around this. I think I'll stick to Steve's suggestion of using toString() then.

> FsShell ls: Add option -i to print inode id
> -------------------------------------------
>
> Key: HADOOP-16082
> URL: https://issues.apache.org/jira/browse/HADOOP-16082
> Project: Hadoop Common
> Issue Type: Improvement
> Components: common
> Affects Versions: 3.2.0, 3.1.1
> Reporter: Siyao Meng
> Assignee: Siyao Meng
> Priority: Major
> Attachments: HADOOP-16082.001.patch
>
> When debugging FSImage corruption issues, I often need to know a file's or directory's inode id. At the moment, the only way to do that is to use the OIV tool to dump the FSImage and look up the filename, which is very inefficient. Here I propose adding an option "-i" to FsShell that prints files' or directories' inode ids.
> h2. Implementation
> h3. For hdfs:// (HDFS)
> fileId exists in HdfsLocatedFileStatus, which is already returned to the hdfs-client. We just need to print it in Ls#processPath().
> h3. For file:// (Local FS)
> h4. Linux
> Use java.nio.
> h4. Windows
> Windows has the concept of a "File ID", which is similar to an inode id. It is unique in NTFS and ReFS.
> h3. For other FS
> The fileId entry will be "0" in FileStatus if it is not set. We could either ignore it or throw an exception.
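For the local-filesystem case the issue sketches ("use java.nio"), the inode number is already reachable without touching HdfsLocatedFileStatus at all. A hedged example, assuming a POSIX file system (Linux/macOS) — on Windows or other non-POSIX stores the attribute lookup throws:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

final class InodeLookup {
    // Returns the inode number of a local file. "unix:ino" is only defined
    // for POSIX file-attribute views, so this fails on e.g. Windows NTFS,
    // which exposes its own "File ID" concept through different APIs.
    static long inodeOf(Path path) throws IOException {
        return (Long) Files.getAttribute(path, "unix:ino");
    }

    private InodeLookup() {}
}
```

The same number is what `ls -i` prints for the file, which makes this a natural fit for an FsShell `-i` flag on file:// paths.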
[jira] [Updated] (HADOOP-16269) ABFS: add listFileStatus with StartFrom
[ https://issues.apache.org/jira/browse/HADOOP-16269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Da Zhou updated HADOOP-16269:

Attachment: HADOOP-16269-002.patch

> ABFS: add listFileStatus with StartFrom
> ---------------------------------------
>
> Key: HADOOP-16269
> URL: https://issues.apache.org/jira/browse/HADOOP-16269
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/azure
> Affects Versions: 3.2.0
> Reporter: Da Zhou
> Assignee: Da Zhou
> Priority: Major
> Attachments: HADOOP-16269-001.patch, HADOOP-16269-002.patch
>
> Adds a listFileStatus that lists a path starting from an entry name, in lexical order. This is added to AzureBlobFileSystemStore and won't be exposed at the FS-level API.
[jira] [Commented] (HADOOP-16266) Add more fine-grained processing time metrics to the RPC layer
[ https://issues.apache.org/jira/browse/HADOOP-16266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16824478#comment-16824478 ] Íñigo Goiri commented on HADOOP-16266:

At some point I had issues with monotonicNowNanos() and NUMA; I think [~ste...@apache.org] mentioned this issue in some other JIRA. I would be careful with that. Other than that, I think that changing the units might be an issue. By default, we should leave the interfaces as they were before and make this configurable. In any case, I think we need to increase the code coverage substantially here. We can add sleeps and verify that the numbers are larger than the sleep itself for each component, and similarly cover the time units.

> Add more fine-grained processing time metrics to the RPC layer
> --------------------------------------------------------------
>
> Key: HADOOP-16266
> URL: https://issues.apache.org/jira/browse/HADOOP-16266
> Project: Hadoop Common
> Issue Type: Improvement
> Components: ipc
> Reporter: Christopher Gregorian
> Assignee: Christopher Gregorian
> Priority: Minor
> Labels: rpc
> Attachments: HADOOP-16266.001.patch
>
> Split off of HDFS-14403 to track the first part: introduces more fine-grained measuring of how a call's processing time is split up.
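Both the unit-configurability point and the sleep-based test Íñigo suggests can be sketched with stock JDK pieces: measure with a monotonic clock internally, and convert to the reporting unit only at the edge. The names below are illustrative, not the patch's actual API.

```java
import java.util.concurrent.TimeUnit;

final class ProcessingTimer {
    // Capture a monotonic start mark; wall-clock time must not be used for
    // durations because it can jump (NTP corrections, manual adjustment).
    static long start() {
        return System.nanoTime();
    }

    // Keep nanoseconds internally and convert only when reporting, so the
    // metric's unit can be made configurable without touching call sites.
    static long elapsed(long startNanos, TimeUnit reportingUnit) {
        return reportingUnit.convert(System.nanoTime() - startNanos,
                                     TimeUnit.NANOSECONDS);
    }

    private ProcessingTimer() {}
}
```

A test in the style suggested above sleeps for a known interval and checks that the reported duration, in each unit, is at least as large as the sleep.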
[jira] [Commented] (HADOOP-16266) Add more fine-grained processing time metrics to the RPC layer
[ https://issues.apache.org/jira/browse/HADOOP-16266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16824476#comment-16824476 ] Hadoop QA commented on HADOOP-16266:

(x) *-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 1m 4s | Docker mode activated. |
|| Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| trunk Compile Tests ||
| 0 | mvndep | 0m 36s | Maven dependency ordering for branch |
| +1 | mvninstall | 23m 42s | trunk passed |
| +1 | compile | 19m 45s | trunk passed |
| +1 | checkstyle | 2m 49s | trunk passed |
| +1 | mvnsite | 2m 55s | trunk passed |
| +1 | shadedclient | 18m 12s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 4m 3s | trunk passed |
| +1 | javadoc | 1m 59s | trunk passed |
|| Patch Compile Tests ||
| 0 | mvndep | 0m 20s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 44s | the patch passed |
| +1 | compile | 15m 23s | the patch passed |
| +1 | javac | 15m 23s | the patch passed |
| -0 | checkstyle | 2m 18s | root: The patch generated 9 new + 299 unchanged - 4 fixed = 308 total (was 303) |
| +1 | mvnsite | 2m 25s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 11m 42s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 4m 11s | the patch passed |
| +1 | javadoc | 2m 0s | the patch passed |
|| Other Tests ||
| -1 | unit | 8m 11s | hadoop-common in the patch failed. |
| -1 | unit | 102m 1s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 47s | The patch does not generate ASF License warnings. |
| | | 223m 25s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.ipc.TestProtoBufRpc |
| | hadoop.hdfs.server.blockmanagement.TestBlockManager |
| | hadoop.hdfs.web.TestWebHdfsTimeouts |
| | hadoop.hdfs.server.datanode.TestDataNodeRollingUpgrade |
| | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HADOOP-16266 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12966783/HADOOP-16266.001.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux e63d7885f0cd 4.4.0-141-generic #167~14.04.1-Ubuntu SMP Mon Dec 10 13:20:24 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
[jira] [Commented] (HADOOP-16269) ABFS: add listFileStatus with StartFrom
[ https://issues.apache.org/jira/browse/HADOOP-16269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16824456#comment-16824456 ] Hadoop QA commented on HADOOP-16269:

(x) *-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 25s | Docker mode activated. |
|| Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. |
|| trunk Compile Tests ||
| +1 | mvninstall | 22m 41s | trunk passed |
| +1 | compile | 0m 37s | trunk passed |
| +1 | checkstyle | 0m 22s | trunk passed |
| +1 | mvnsite | 0m 34s | trunk passed |
| +1 | shadedclient | 12m 51s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 0m 52s | trunk passed |
| +1 | javadoc | 0m 27s | trunk passed |
|| Patch Compile Tests ||
| +1 | mvninstall | 0m 33s | the patch passed |
| +1 | compile | 0m 31s | the patch passed |
| +1 | javac | 0m 31s | the patch passed |
| -0 | checkstyle | 0m 16s | hadoop-tools/hadoop-azure: The patch generated 11 new + 2 unchanged - 0 fixed = 13 total (was 2) |
| +1 | mvnsite | 0m 33s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 14m 9s | patch has no errors when building and testing our client artifacts. |
| -1 | findbugs | 0m 53s | hadoop-tools/hadoop-azure generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) |
| +1 | javadoc | 0m 22s | the patch passed |
|| Other Tests ||
| +1 | unit | 1m 19s | hadoop-azure in the patch passed. |
| +1 | asflicense | 0m 36s | The patch does not generate ASF License warnings. |
| | | 58m 19s | |

|| Reason || Tests ||
| FindBugs | module:hadoop-tools/hadoop-azure |
| | Found reliance on default encoding in org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.generateContinuationTokenToStart(String, String): String.getBytes() At AzureBlobFileSystemStore.java:[line 622] |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HADOOP-16269 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12966795/HADOOP-16269-001.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 2c75233be311 4.4.0-144-generic #170~14.04.1-Ubuntu SMP Mon Mar 18 15:02:05 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 59ded76 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| checkstyle |
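The FindBugs hit above is the classic default-charset trap: String.getBytes() with no argument encodes with the platform's default charset, so a continuation token generated on one machine may not decode on another. A hedged sketch of the usual fix — the helper and class names here are illustrative, not the actual AzureBlobFileSystemStore code:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

final class ContinuationTokens {
    // Passing the charset explicitly both silences the FindBugs
    // DM_DEFAULT_ENCODING warning and makes the bytes deterministic
    // across platforms and locales.
    static String encode(String startFrom) {
        return Base64.getEncoder()
                .encodeToString(startFrom.getBytes(StandardCharsets.UTF_8));
    }

    static String decode(String token) {
        return new String(Base64.getDecoder().decode(token),
                StandardCharsets.UTF_8);
    }

    private ContinuationTokens() {}
}
```

The round trip stays stable for non-ASCII entry names as well, which matters for listing from an arbitrary StartFrom path.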
[GitHub] [hadoop] hadoop-yetus commented on issue #758: HDDS-999. Make the DNS resolution in OzoneManager more resilient. (swagle)
hadoop-yetus commented on issue #758: HDDS-999. Make the DNS resolution in OzoneManager more resilient. (swagle) URL: https://github.com/apache/hadoop/pull/758#issuecomment-485940483 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 25 | Docker mode activated. | ||| _ Prechecks _ | | 0 | yamllint | 0 | yamllint was not available. | | +1 | @author | 0 | The patch does not contain any @author tags. | | -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | ||| _ trunk Compile Tests _ | | 0 | mvndep | 13 | Maven dependency ordering for branch | | +1 | mvninstall | 1022 | trunk passed | | +1 | compile | 107 | trunk passed | | +1 | checkstyle | 25 | trunk passed | | +1 | mvnsite | 52 | trunk passed | | +1 | shadedclient | 687 | branch has no errors when building and testing our client artifacts. | | 0 | findbugs | 0 | Skipped patched modules with no Java source: hadoop-ozone/dist | | +1 | findbugs | 37 | trunk passed | | +1 | javadoc | 34 | trunk passed | ||| _ Patch Compile Tests _ | | 0 | mvndep | 12 | Maven dependency ordering for patch | | -1 | mvninstall | 18 | dist in the patch failed. | | +1 | compile | 101 | the patch passed | | +1 | javac | 101 | the patch passed | | +1 | checkstyle | 23 | the patch passed | | +1 | mvnsite | 42 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 683 | patch has no errors when building and testing our client artifacts. | | 0 | findbugs | 0 | Skipped patched modules with no Java source: hadoop-ozone/dist | | +1 | findbugs | 43 | the patch passed | | +1 | javadoc | 31 | the patch passed | ||| _ Other Tests _ | | +1 | unit | 38 | ozone-manager in the patch passed. | | +1 | unit | 21 | dist in the patch passed. 
| | +1 | asflicense | 25 | The patch does not generate ASF License warnings. | | | | 3139 | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/hadoop-multibranch/job/PR-758/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/758 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient yamllint findbugs checkstyle | | uname | Linux 33f2157700e6 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 59ded76 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_191 | | findbugs | v3.1.0-RC1 | | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-758/2/artifact/out/patch-mvninstall-hadoop-ozone_dist.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-758/2/testReport/ | | Max. process+thread count | 446 (vs. ulimit of 5500) | | modules | C: hadoop-ozone/ozone-manager hadoop-ozone/dist U: hadoop-ozone | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-758/2/console | | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] xiaoyuyao opened a new pull request #762: HDDS-1455. Inconsistent naming convention with Ozone Kerberos configu…
xiaoyuyao opened a new pull request #762: HDDS-1455. Inconsistent naming convention with Ozone Kerberos configu… URL: https://github.com/apache/hadoop/pull/762
[jira] [Updated] (HADOOP-16269) ABFS: add listFileStatus with StartFrom
[ https://issues.apache.org/jira/browse/HADOOP-16269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Da Zhou updated HADOOP-16269:

Status: Patch Available (was: Open)
[jira] [Commented] (HADOOP-16269) ABFS: add listFileStatus with StartFrom
[ https://issues.apache.org/jira/browse/HADOOP-16269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16824401#comment-16824401 ] Da Zhou commented on HADOOP-16269:

All tests passed:
Tests run: 40, Failures: 0, Errors: 0, Skipped: 0
Tests run: 342, Failures: 0, Errors: 0, Skipped: 21
Tests run: 190, Failures: 0, Errors: 0, Skipped: 15
BUILD SUCCESS
[jira] [Updated] (HADOOP-16269) ABFS: add listFileStatus with StartFrom
[ https://issues.apache.org/jira/browse/HADOOP-16269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Da Zhou updated HADOOP-16269:

Attachment: HADOOP-16269-001.patch
[jira] [Created] (HADOOP-16269) ABFS: add listFileStatus with StartFrom
Da Zhou created HADOOP-16269:

Summary: ABFS: add listFileStatus with StartFrom
Key: HADOOP-16269
URL: https://issues.apache.org/jira/browse/HADOOP-16269
Project: Hadoop Common
Issue Type: Sub-task
Components: fs/azure
Affects Versions: 3.2.0
Reporter: Da Zhou
Assignee: Da Zhou

Adds a listFileStatus that lists a path starting from an entry name, in lexical order. This is added to AzureBlobFileSystemStore and won't be exposed at the FS-level API.
[jira] [Comment Edited] (HADOOP-15616) Incorporate Tencent Cloud COS File System Implementation
[ https://issues.apache.org/jira/browse/HADOOP-15616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16824396#comment-16824396 ] YangY edited comment on HADOOP-15616 at 4/23/19 6:13 PM:

[~ste...@apache.org] Thank you very much for your reply. I do understand the effort needed to support and maintain an object store in the Apache Hadoop community, and will try my best to achieve it. I will improve the test code as soon as possible based on your comments. Thanks again for your continued attention to this work.

was (Author: yuyang733): [~ste...@apache.org] Thank you very much for your reply. I do understand the effort to support and maintain an object store in the Apache Hadoop Community, and will try my best to achieve it. I will improve the test code as soon as possible based on your comments. Thanks again for your continued attention to this patch.

> Incorporate Tencent Cloud COS File System Implementation
> --------------------------------------------------------
>
> Key: HADOOP-15616
> URL: https://issues.apache.org/jira/browse/HADOOP-15616
> Project: Hadoop Common
> Issue Type: New Feature
> Components: fs/cos
> Reporter: Junping Du
> Assignee: YangY
> Priority: Major
> Attachments: HADOOP-15616.001.patch, HADOOP-15616.002.patch, HADOOP-15616.003.patch, HADOOP-15616.004.patch, HADOOP-15616.005.patch, HADOOP-15616.006.patch, HADOOP-15616.007.patch, HADOOP-15616.008.patch, HADOOP-15616.009.patch, Tencent-COS-Integrated-v2.pdf, Tencent-COS-Integrated.pdf
>
> Tencent Cloud is a top-2 cloud vendor in the China market, and its object store COS (https://intl.cloud.tencent.com/product/cos) is widely used among China's cloud users, but it is currently hard for Hadoop users to access data stored on COS because Hadoop has no native support for it.
> This work aims to integrate Tencent Cloud COS with Hadoop/Spark/Hive, just as was done before for S3, ADL, OSS, etc. With simple configuration, Hadoop applications can read/write data from COS without any code change.
[jira] [Comment Edited] (HADOOP-15616) Incorporate Tencent Cloud COS File System Implementation
[ https://issues.apache.org/jira/browse/HADOOP-15616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16824396#comment-16824396 ] YangY edited comment on HADOOP-15616 at 4/23/19 6:13 PM:

[~ste...@apache.org] Thank you very much for your reply. I do understand the effort to support and maintain an object store in the Apache Hadoop Community, and will try my best to achieve it. I will improve the test code as soon as possible based on your comments. Thanks again for your continued attention to this patch.

was (Author: yuyang733): [~ste...@apache.org] Thank you very much for your reply. I do understand the effort to support and maintain an object store in the Apache Hadoop Community, and will try my best to achieve it. I will improve the test code as soon as possible based on your comments.
-- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-16263) Update BUILDING.txt with macOS native build instructions
[ https://issues.apache.org/jira/browse/HADOOP-16263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16824384#comment-16824384 ] Siyao Meng commented on HADOOP-16263: - Progress update: I've finished the section of the doc. I will test it on a clean install before posting the diff. > Update BUILDING.txt with macOS native build instructions > > > Key: HADOOP-16263 > URL: https://issues.apache.org/jira/browse/HADOOP-16263 > Project: Hadoop Common > Issue Type: Task >Reporter: Siyao Meng >Assignee: Siyao Meng >Priority: Minor > > I recently tried to compile Hadoop native on a Mac and found a few catches, > involving fixing some YARN native compiling issues (YARN-8622, YARN-9487). > Also, need to specify OpenSSL (brewed) header include dir when building > native with maven on a Mac. Should update BUILDING.txt for this. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] swagle commented on a change in pull request #758: HDDS-999. Make the DNS resolution in OzoneManager more resilient. (swagle)
swagle commented on a change in pull request #758: HDDS-999. Make the DNS resolution in OzoneManager more resilient. (swagle) URL: https://github.com/apache/hadoop/pull/758#discussion_r277794978 ## File path: hadoop-ozone/dist/src/main/compose/ozone-om-ha/docker-compose.yaml ## @@ -36,7 +36,6 @@ services: - 9890:9872 environment: ENSURE_OM_INITIALIZED: /data/metadata/om/current/VERSION - WAITFOR: scm:9876 Review comment: Making the changes to remove the wait for, @elek it makes sense to remove the wait for from everywhere, right? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-11452) Make FileSystem.rename(path, path, options) public, specified, tested
[ https://issues.apache.org/jira/browse/HADOOP-11452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16824375#comment-16824375 ] Steve Loughran commented on HADOOP-11452: - BTW, after making this public (and backporting all the way back to Hadoop branch-2), I'd like to move to a builder version of this, something like
{code}
CompletableFuture renamed = fs.rename()
  .withSource(Path or FileStatus)
  .withDest(Path or FileStatus)
  .withPermissions(perms)
  .must(...)
  .opt(...)
  .build()
{code}
This makes it more obvious that async renames are a good thing, and gives us the ability to pass a FileStatus in as a source/dest rather than a path *so avoiding the need to call getFileStatus again*. This is a good thing. That would be something I wouldn't backport. > Make FileSystem.rename(path, path, options) public, specified, tested > - > > Key: HADOOP-11452 > URL: https://issues.apache.org/jira/browse/HADOOP-11452 > Project: Hadoop Common > Issue Type: Task > Components: fs >Affects Versions: 2.7.3 >Reporter: Yi Liu >Assignee: Steve Loughran >Priority: Major > Attachments: HADOOP-11452-001.patch, HADOOP-11452-002.patch, > HADOOP-14452-004.patch, HADOOP-14452-branch-2-003.patch > > > Currently in {{FileSystem}}, {{rename}} with _Rename options_ is protected > and with _deprecated_ annotation. And the default implementation is not > atomic. > So this method is not able to be used outside. On the other hand, HDFS has a > good and atomic implementation. (Also an interesting thing in {{DFSClient}}, > the _deprecated_ annotations for these two methods are opposite). > It makes sense to make public for {{rename}} with _Rename options_, since > it's atomic for rename+overwrite, also it saves RPC calls if user desires > rename+overwrite. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
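The builder idea in the comment above can be sketched in plain Java. Everything here is hypothetical — {{RenameBuilder}} and its {{with*}} methods are illustrative names, not the actual {{FileSystem}} API — but it shows how a {{build()}} that returns a {{CompletableFuture}} makes the asynchronous nature of the rename explicit to callers:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionException;

public class RenameBuilderDemo {

    // Hypothetical builder: collects the rename parameters, then build()
    // kicks off the operation and hands back a future.
    static class RenameBuilder {
        private String source;
        private String dest;

        RenameBuilder withSource(String path) { this.source = path; return this; }
        RenameBuilder withDest(String path)   { this.dest = path;   return this; }

        CompletableFuture<Boolean> build() {
            final String src = source, dst = dest;
            return CompletableFuture.supplyAsync(() -> {
                if (src == null || dst == null) {
                    throw new CompletionException(
                        new IllegalStateException("source and dest are required"));
                }
                return true; // a real filesystem would perform the rename here
            });
        }
    }

    public static void main(String[] args) {
        boolean renamed = new RenameBuilder()
            .withSource("/tmp/a")
            .withDest("/tmp/b")
            .build()
            .join();   // a caller wanting blocking behaviour just joins
        System.out.println(renamed); // true
    }
}
```

A caller that wants the old blocking semantics simply chains {{.build().join()}}; everyone else gets the future to compose with.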
[jira] [Commented] (HADOOP-16266) Add more fine-grained processing time metrics to the RPC layer
[ https://issues.apache.org/jira/browse/HADOOP-16266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16824359#comment-16824359 ] Erik Krogen commented on HADOOP-16266: -- Hey [~cgregori], looking good overall! I have some small comments throughout; see below. Besides those, this could use some unit tests. I still like the idea of moving to micro or nanoseconds for the processing time, but am still worried... Ping [~jojochuang] again and also [~elgoiri] for thoughts on whether we can change the units of these metrics.
* We typically indent parameter lists by 4, rather than to line up with the first parameter, like:
{code:java}
@Override
public void addResponseTime(String name, Schedulable obj,
    ProcessingDetails details) {
}
{code}
rather than
{code:java}
@Override
public void addResponseTime(String name, Schedulable obj,
                            ProcessingDetails details) {
}
{code}
* You have some extraneous whitespace formatting changes; unless you're making a change on that line already, it's often better to leave them unchanged (it makes backports easier).
* I would prefer to avoid the loss of precision from casting to {{int}} in {{DecayRpcScheduler#addResponseTime()}}. Can we simply throw an {{UnsupportedOperationException}} within the old version of the method, and move the implementation to the new one? I don't think the old one will be called anywhere; we are just leaving it for compatibility with the {{RpcScheduler}} interface.
* Can we add a {{default}} implementation of the new method within {{RpcScheduler}}? Otherwise old implementations of this interface which may be floating around will fail. See what I mean [here|https://dzone.com/articles/interface-default-methods-java]. After this you shouldn't need to make changes to {{TestRpcScheduler}}.
* For the parameter names {{start}} and {{delta}} within {{FSNamesystemLock}}, can we rename them to {{startNanos}} and {{deltaNanos}}?
* Does {{ProcessingDetails.Timing.newEnumArray()}} need to be {{public}}? Does it even need to exist at all? It seems it's only used in one place.
* Within {{ProtobufRpcEngine}}, we took away the following:
{code:java}
} finally {
  currentCallInfo.set(null);
{code}
But I think we should still be resetting the current call info. Also, it looks like now we only catch and update the name for {{ServiceException}}, but previously it was for all {{Exception}}, which I think should continue to be the case. Also in this method, can we store {{Server.getCurCall().get()}} before the {{try}} block, and then just re-use it? {{ThreadLocal}} is cheap but not free. (Same for {{WritableRpcEngine}}.)
* {{Server#logSlowRpcCalls()}} should probably use a {{long}} instead of {{int}} if we are going to use microseconds. Same for the {{RpcMetrics}} methods.
* Can you add comments describing the various arithmetic ops you do within {{Server#updateMetrics()}}?
* If we change the {{Call.timestamp}} field from millis to nanos, can we also rename it to {{timestampNanos}}?
* For {{Call.detailedMetricsName}}, if there are any accesses of {{getDetailedMetricsName()}} in a different thread from the one which called {{setDetailedMetricsName()}}, it needs to be {{volatile}}. I'm not sure if this is the case, but it's probably best to do it just in case.
* Can you add at least some simple Javadoc for the new {{Call}} methods?
* It seems a {{LOG}} message was removed from both of the {{RpcEngine}} classes. A new one was added within {{Server}}, but it is missing information: the method name, whether it was deferred, and the exception name.
> Add more fine-grained processing time metrics to the RPC layer > -- > > Key: HADOOP-16266 > URL: https://issues.apache.org/jira/browse/HADOOP-16266 > Project: Hadoop Common > Issue Type: Improvement > Components: ipc >Reporter: Christopher Gregorian >Assignee: Christopher Gregorian >Priority: Minor > Labels: rpc > Attachments: HADOOP-16266.001.patch > > > Splitting off of HDFS-14403 to track the first part: introduces more > fine-grained measuring of how a call's processing time is split up. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
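The {{default}}-method compatibility trick suggested in the review above can be illustrated with a small self-contained sketch. The names here ({{Scheduler}}, the method signatures, the unit conversion) are simplified stand-ins, not the real {{RpcScheduler}} interface:

```java
// Old implementations compiled against the original interface keep working
// because the new, more precise overload gets a default body that delegates
// to the old method.
interface Scheduler {
    // Pre-existing method, kept for compatibility.
    void addResponseTime(String callName, int processingMillis);

    // New overload with nanosecond precision; the default implementation
    // means classes implementing only the old method still compile and run.
    default void addResponseTime(String callName, long processingNanos) {
        addResponseTime(callName, (int) (processingNanos / 1_000_000L));
    }
}

public class DefaultMethodDemo implements Scheduler {
    int lastMillis;

    @Override
    public void addResponseTime(String callName, int processingMillis) {
        lastMillis = processingMillis; // record what the old path received
    }

    public static void main(String[] args) {
        DefaultMethodDemo d = new DefaultMethodDemo();
        // The long argument selects the new default method, which converts
        // 3,500,000 ns down to 3 ms before delegating.
        d.addResponseTime("getBlockLocations", 3_500_000L);
        System.out.println(d.lastMillis); // 3
    }
}
```

The cast to {{int}} in the default body is exactly the precision loss the review warns about, which is why the reviewer suggests moving the real implementation to the new method instead.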
[jira] [Commented] (HADOOP-15616) Incorporate Tencent Cloud COS File System Implementation
[ https://issues.apache.org/jira/browse/HADOOP-15616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16824348#comment-16824348 ] Steve Loughran commented on HADOOP-15616: - Just taken a quick look. First, I need to warn you that all object store patches take a while to get in, as it's a long-term commitment to maintaining them. So we have to worry about testability as well as actual implementation code. I've just looked at the test code.
* All tests against the live store should have the prefix ITest, not Test, and be set up so that they run in the mvn verify stage, with all that can run in parallel doing so, for performance. Look at hadoop-azure and hadoop-aws for examples here.
* If your store doesn't support append, don't bother with the subclass to skip it.
* But do add tests for the other core contract operations, such as {{AbstractContractGetFileStatusTest}}, {{AbstractContractDistCpTest}} and ideally, {{FSMainOperationsBaseTest}}.
I'm too overloaded with commitments to stand a chance of reviewing these, and not set up to test. [~djp] - can you commit some time to this? > Incorporate Tencent Cloud COS File System Implementation > > > Key: HADOOP-15616 > URL: https://issues.apache.org/jira/browse/HADOOP-15616 > Project: Hadoop Common > Issue Type: New Feature > Components: fs/cos >Reporter: Junping Du >Assignee: YangY >Priority: Major > Attachments: HADOOP-15616.001.patch, HADOOP-15616.002.patch, > HADOOP-15616.003.patch, HADOOP-15616.004.patch, HADOOP-15616.005.patch, > HADOOP-15616.006.patch, HADOOP-15616.007.patch, HADOOP-15616.008.patch, > HADOOP-15616.009.patch, Tencent-COS-Integrated-v2.pdf, > Tencent-COS-Integrated.pdf > > > Tencent cloud is top 2 cloud vendors in China market and the object store COS > ([https://intl.cloud.tencent.com/product/cos]) is widely used among China's > cloud users but now it is hard for hadoop user to access data laid on COS > storage as no native support for COS in Hadoop. 
> This work aims to integrate Tencent cloud COS with Hadoop/Spark/Hive, just > like what we do before for S3, ADL, OSS, etc. With simple configuration, > Hadoop applications can read/write data from COS without any code change. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-16267) Performance gain if you use replace() instead of replaceAll() for replacing patterns that do not use a regex
[ https://issues.apache.org/jira/browse/HADOOP-16267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16824339#comment-16824339 ] Hadoop QA commented on HADOOP-16267: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 8s{color} | {color:red} HADOOP-16267 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HADOOP-16267 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12966565/HADOOP-16267.patch | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/16181/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > Performance gain if you use replace() instead of replaceAll() for replacing > patterns that do not use a regex > - > > Key: HADOOP-16267 > URL: https://issues.apache.org/jira/browse/HADOOP-16267 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 3.1.2 >Reporter: bd2019us >Priority: Minor > Labels: pull-request-available > Attachments: HADOOP-16267.patch > > > Performance gain if you use replace() instead of replaceAll() for replacing > patterns that do not use a regex. This happens because replace() does not > need to compile the regex pattern like replaceAll() does. 
> Affected files: > * > hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyCommitter.java > * > hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/state/Graph.java > * > hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/PrintJarMainClass.java -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
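A minimal demonstration of the difference described in HADOOP-16267: {{replaceAll()}} interprets its first argument as a regular expression and compiles a {{Pattern}} on every call, while {{replace()}} performs a literal substring substitution. The class name below is borrowed from the affected-files list purely for illustration:

```java
public class ReplaceVsReplaceAll {
    public static void main(String[] args) {
        String cls = "org.apache.hadoop.util.PrintJarMainClass";

        // replaceAll() takes a regex: '.' matches any character, so it must
        // be escaped, and a Pattern is compiled on each invocation.
        String viaRegex = cls.replaceAll("\\.", "/");

        // replace() treats both arguments as literal CharSequences — same
        // result, no regex machinery involved.
        String viaLiteral = cls.replace(".", "/");

        System.out.println(viaRegex.equals(viaLiteral)); // true
        System.out.println(viaLiteral); // org/apache/hadoop/util/PrintJarMainClass
    }
}
```

When the pattern contains no regex metacharacters at all (e.g. replacing {{"abc"}}), the two are interchangeable and {{replace()}} is simply cheaper; the escaping difference above only matters because {{.}} is a metacharacter.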
[jira] [Commented] (HADOOP-7729) Send back valid HTTP response if user hits IPC port with HTTP GET
[ https://issues.apache.org/jira/browse/HADOOP-7729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16824337#comment-16824337 ] Steve Loughran commented on HADOOP-7729: if these are your IPC ports, then they are false alarms. All the IPC ports do is recognise when a GET request has come in, and tell the user to go away. These are not Web servers of any kind. That said, do see if you can try any overflow attacks to see if a GET with a very large amount of path breaks; I'm now curious about that > Send back valid HTTP response if user hits IPC port with HTTP GET > - > > Key: HADOOP-7729 > URL: https://issues.apache.org/jira/browse/HADOOP-7729 > Project: Hadoop Common > Issue Type: Improvement > Components: ipc >Affects Versions: 0.23.0 >Reporter: Todd Lipcon >Assignee: Todd Lipcon >Priority: Major > Fix For: 2.0.0-alpha > > Attachments: hadoop-7729.txt > > > Often, I've seen users get confused between the IPC ports and HTTP ports for > a daemon. It would be easy for us to detect when an HTTP GET request hits an > IPC port, and instead of sending back garbage, we can send back a valid HTTP > response explaining their mistake. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
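The behaviour HADOOP-7729 added can be sketched as follows. This is an illustrative stand-in, not the actual Hadoop {{Server}} code; it relies on the fact that a real Hadoop IPC connection opens with the "hrpc" magic bytes, so anything starting with "GET " can safely be answered with a canned HTTP response:

```java
import java.nio.charset.StandardCharsets;

public class IpcHttpGuard {
    static final String HTTP_RESPONSE =
        "HTTP/1.1 404 Not Found\r\n" +
        "Content-Type: text/plain\r\n\r\n" +
        "It looks like you are making an HTTP request to a Hadoop IPC port.\r\n" +
        "This is not an HTTP port; use the daemon's web UI port instead.\r\n";

    // Returns the canned HTTP reply if the connection header looks like an
    // HTTP GET, or null if it is a normal IPC handshake.
    static String respondIfHttpGet(byte[] header) {
        String prefix = new String(header, 0, Math.min(4, header.length),
            StandardCharsets.US_ASCII);
        return "GET ".equals(prefix) ? HTTP_RESPONSE : null;
    }

    public static void main(String[] args) {
        byte[] browser = "GET / HTTP/1.1".getBytes(StandardCharsets.US_ASCII);
        byte[] ipc = "hrpc".getBytes(StandardCharsets.US_ASCII);
        System.out.println(respondIfHttpGet(browser) != null); // true
        System.out.println(respondIfHttpGet(ipc) != null);     // false
    }
}
```

Because only the first four bytes are inspected, an oversized GET path (the overflow concern raised above) never reaches this check — it would instead exercise whatever code consumes the rest of the request line.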
[jira] [Updated] (HADOOP-16267) Performance gain if you use replace() instead of replaceAll() for replacing patterns that do not use a regex
[ https://issues.apache.org/jira/browse/HADOOP-16267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-16267: Affects Version/s: 3.1.2 Status: Patch Available (was: Open) > Performance gain if you use replace() instead of replaceAll() for replacing > patterns that do not use a regex > - > > Key: HADOOP-16267 > URL: https://issues.apache.org/jira/browse/HADOOP-16267 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 3.1.2 >Reporter: bd2019us >Priority: Minor > Labels: pull-request-available > Attachments: HADOOP-16267.patch > > > Performance gain if you use replace() instead of replaceAll() for replacing > patterns that do not use a regex. This happens because replace() does not > need to compile the regex pattern like replaceAll() does. > Affected files: > * > hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyCommitter.java > * > hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/state/Graph.java > * > hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/PrintJarMainClass.java -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on issue #759: HDDS-1453. Fix unit test TestConfigurationFields broken on trunk. (swagle)
hadoop-yetus commented on issue #759: HDDS-1453. Fix unit test TestConfigurationFields broken on trunk. (swagle) URL: https://github.com/apache/hadoop/pull/759#issuecomment-485887284 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 27 | Docker mode activated. | ||| _ Prechecks _ | | +1 | @author | 0 | The patch does not contain any @author tags. | | -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | ||| _ trunk Compile Tests _ | | +1 | mvninstall | 1116 | trunk passed | | +1 | compile | 42 | trunk passed | | +1 | checkstyle | 19 | trunk passed | | +1 | mvnsite | 37 | trunk passed | | +1 | shadedclient | 692 | branch has no errors when building and testing our client artifacts. | | +1 | findbugs | 73 | trunk passed | | +1 | javadoc | 41 | trunk passed | ||| _ Patch Compile Tests _ | | +1 | mvninstall | 43 | the patch passed | | +1 | compile | 31 | the patch passed | | +1 | javac | 31 | the patch passed | | +1 | checkstyle | 14 | the patch passed | | +1 | mvnsite | 35 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | xml | 1 | The patch has no ill-formed XML file. | | +1 | shadedclient | 737 | patch has no errors when building and testing our client artifacts. | | +1 | findbugs | 87 | the patch passed | | +1 | javadoc | 37 | the patch passed | ||| _ Other Tests _ | | -1 | unit | 69 | common in the patch failed. | | +1 | asflicense | 25 | The patch does not generate ASF License warnings. 
| | | | 3208 | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdds.scm.net.TestNodeSchemaManager | | | hadoop.hdds.scm.net.TestNetworkTopologyImpl | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/hadoop-multibranch/job/PR-759/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/759 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml | | uname | Linux c8172ebdc554 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 59ded76 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_191 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-759/2/artifact/out/patch-unit-hadoop-hdds_common.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-759/2/testReport/ | | Max. process+thread count | 445 (vs. ulimit of 5500) | | modules | C: hadoop-hdds/common U: hadoop-hdds/common | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-759/2/console | | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] anuengineer commented on a change in pull request #759: HDDS-1453. Fix unit test TestConfigurationFields broken on trunk. (swagle)
anuengineer commented on a change in pull request #759: HDDS-1453. Fix unit test TestConfigurationFields broken on trunk. (swagle) URL: https://github.com/apache/hadoop/pull/759#discussion_r277766213 ## File path: hadoop-hdds/common/src/main/resources/ozone-default.xml ## @@ -2286,12 +2286,20 @@ ozone.metadata.dirs. + +ozone.scm.network.topology.schema.file.type +xml +OZONE, MANAGEMENT Review comment: I think we should add a new tag called network since Management sounds too wide and catch all. Or if you want to be more specific, Topology might be good too. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #758: HDDS-999. Make the DNS resolution in OzoneManager more resilient. (swagle)
bharatviswa504 commented on a change in pull request #758: HDDS-999. Make the DNS resolution in OzoneManager more resilient. (swagle) URL: https://github.com/apache/hadoop/pull/758#discussion_r277764668 ## File path: hadoop-ozone/dist/src/main/compose/ozone-om-ha/docker-compose.yaml ## @@ -36,7 +36,6 @@ services: - 9890:9872 environment: ENSURE_OM_INITIALIZED: /data/metadata/om/current/VERSION - WAITFOR: scm:9876 Review comment: We removed the WAITFOR env usage here, but there are a few other files where it is still used, such as om-statefulset.yaml. Do we need to remove it from there as well? And since we are removing the usage of WAITFOR, do we also need to remove the logic for it in the docker image code? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16266) Add more fine-grained processing time metrics to the RPC layer
[ https://issues.apache.org/jira/browse/HADOOP-16266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Christopher Gregorian updated HADOOP-16266: --- Status: Patch Available (was: Open) > Add more fine-grained processing time metrics to the RPC layer > -- > > Key: HADOOP-16266 > URL: https://issues.apache.org/jira/browse/HADOOP-16266 > Project: Hadoop Common > Issue Type: Improvement > Components: ipc >Reporter: Christopher Gregorian >Assignee: Christopher Gregorian >Priority: Minor > Labels: rpc > Attachments: HADOOP-16266.001.patch > > > Splitting off of HDFS-14403 to track the first part: introduces more > fine-grained measuring of how a call's processing time is split up. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16266) Add more fine-grained processing time metrics to the RPC layer
[ https://issues.apache.org/jira/browse/HADOOP-16266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Christopher Gregorian updated HADOOP-16266: --- Attachment: (was: HDFS-16266.001.patch) > Add more fine-grained processing time metrics to the RPC layer > -- > > Key: HADOOP-16266 > URL: https://issues.apache.org/jira/browse/HADOOP-16266 > Project: Hadoop Common > Issue Type: Improvement > Components: ipc >Reporter: Christopher Gregorian >Assignee: Christopher Gregorian >Priority: Minor > Labels: rpc > Attachments: HADOOP-16266.001.patch > > > Splitting off of HDFS-14403 to track the first part: introduces more > fine-grained measuring of how a call's processing time is split up. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16266) Add more fine-grained processing time metrics to the RPC layer
[ https://issues.apache.org/jira/browse/HADOOP-16266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Christopher Gregorian updated HADOOP-16266: --- Attachment: HADOOP-16266.001.patch > Add more fine-grained processing time metrics to the RPC layer > -- > > Key: HADOOP-16266 > URL: https://issues.apache.org/jira/browse/HADOOP-16266 > Project: Hadoop Common > Issue Type: Improvement > Components: ipc >Reporter: Christopher Gregorian >Assignee: Christopher Gregorian >Priority: Minor > Labels: rpc > Attachments: HADOOP-16266.001.patch > > > Splitting off of HDFS-14403 to track the first part: introduces more > fine-grained measuring of how a call's processing time is split up. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] swagle commented on a change in pull request #759: HDDS-1453. Fix unit test TestConfigurationFields broken on trunk. (swagle)
swagle commented on a change in pull request #759: HDDS-1453. Fix unit test TestConfigurationFields broken on trunk. (swagle) URL: https://github.com/apache/hadoop/pull/759#discussion_r277754032 ## File path: hadoop-hdds/common/src/main/resources/ozone-default.xml ## @@ -2286,12 +2286,20 @@ ozone.metadata.dirs. + +ozone.scm.network.topology.schema.file.type +xml +OZONE, MANAGEMENT Review comment: Thanks for review @bharatviswa504, updated the patch. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #759: HDDS-1453. Fix unit test TestConfigurationFields broken on trunk. (swagle)
bharatviswa504 commented on a change in pull request #759: HDDS-1453. Fix unit test TestConfigurationFields broken on trunk. (swagle) URL: https://github.com/apache/hadoop/pull/759#discussion_r277751882 ## File path: hadoop-hdds/common/src/main/resources/ozone-default.xml ## @@ -2286,12 +2286,20 @@ ozone.metadata.dirs. + +ozone.scm.network.topology.schema.file.type +xml +OZONE, MANAGEMENT Review comment: I think here in tag do we need to add SCM also? (As this is related to SCM) This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] nandakumar131 merged pull request #755: HDDS-1411. Add unit test to check if SCM correctly sends close commands for containers in closing state after a restart.
nandakumar131 merged pull request #755: HDDS-1411. Add unit test to check if SCM correctly sends close commands for containers in closing state after a restart. URL: https://github.com/apache/hadoop/pull/755 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] xiaoyuyao commented on issue #757: HDDS-1450. Fix nightly run failures after HDDS-976. Contributed by Xi…
xiaoyuyao commented on issue #757: HDDS-1450. Fix nightly run failures after HDDS-976. Contributed by Xi… URL: https://github.com/apache/hadoop/pull/757#issuecomment-485850124 bq. Does this fix the nightly build? It seems not fix "the good.xml file not found" issue, how does it work? The reason for the failure, I guess, is that ozone.scm.network.topology.schema.file.type does not have a default value. By removing this key, init() will load the schema based on the file extension directly. It seems to be working, based on the result here: https://ci.anzix.net/job/ozone/16691/testReport/ This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-16259) Distcp to set S3 Storage Class
[ https://issues.apache.org/jira/browse/HADOOP-16259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16824161#comment-16824161 ] Kai Xie commented on HADOOP-16259: -- The class FSProtos is generated from the protobuf definition [FSProtos.proto|https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/proto/FSProtos.proto]. When you run `mvn install`, the Maven plugin compiles the [protobuf file|https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/pom.xml#L411]. > Distcp to set S3 Storage Class > -- > > Key: HADOOP-16259 > URL: https://issues.apache.org/jira/browse/HADOOP-16259 > Project: Hadoop Common > Issue Type: New Feature > Components: hadoop-aws, tools/distcp >Affects Versions: 2.8.4 >Reporter: Prakash Gopalsamy >Priority: Minor > Attachments: ENHANCE_HADOOP_DISTCP_FOR_CUSTOM_S3_STORAGE_CLASS.docx > > Original Estimate: 168h > Remaining Estimate: 168h > > Hadoop distcp implementation doesn’t have properties to override Storage > class while transferring data to Amazon S3 storage. Hadoop distcp doesn’t set > any storage class while transferring data to Amazon S3 storage. Due to this > all the objects moved from cluster to S3 using Hadoop Distcp are been stored > in the default storage class “STANDARD”. By providing a new feature to > override the default S3 storage class through configuration properties will > be helpful to upload objects in other storage classes. I have come up with a > design to implement this feature in a design document and uploaded the same > in the JIRA. Kindly review and let me know for your suggestions. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-16264) [JDK11] Track failing Hadoop unit tests
[ https://issues.apache.org/jira/browse/HADOOP-16264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16824152#comment-16824152 ] Siyao Meng commented on HADOOP-16264: - [~adam.antal] I've just uploaded a new run with branch-3.1.2 + HADOOP-12760 (solves the sun.misc.Cleaner error). You can use this keyword to search for unit test failures in the log and find which components are failing: "<<< FAILURE!" Environment: Ubuntu 18.04.2 LTS, OpenJDK 11.0.2u9 The reactor summary shows 68 projects succeeded and 29 projects have one or more unit test failures. I saw quite a few "Timed out waiting" messages in the log. These could be buggy or flaky. I wonder if there is a list of known flaky unit tests that we could ignore for the time being so we can focus on the other, more important ones? > [JDK11] Track failing Hadoop unit tests > --- > > Key: HADOOP-16264 > URL: https://issues.apache.org/jira/browse/HADOOP-16264 > Project: Hadoop Common > Issue Type: Task >Affects Versions: 3.1.2 >Reporter: Siyao Meng >Assignee: Siyao Meng >Priority: Major > Attachments: test-run1.tgz > > > Although there is still a lot of work to do before we can compile Hadoop with > JDK 11 (HADOOP-15338), it is possible to compile Hadoop with JDK 8 and run > (e.g. HDFS NN/DN, YARN NM/RM) on JDK 11 at this moment. > But after compiling branch-3.1.2 with JDK 8, I ran the unit tests with JDK 11 and > there are a LOT of unit test failures (44 out of 96 maven projects contain at > least one unit test failure according to the maven reactor summary). This may > well indicate that some functionality is actually broken on JDK 11. Some of the > failures already have a jira number. Some might have been fixed in 3.2.0. > Some might share the same root cause. > By definition, this jira should be part of HADOOP-15338. But the goal of this > one is just to keep track of unit test failures and (hopefully) resolve all > of them soon. 
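The keyword search the comment describes can be done programmatically as well. A small sketch (the sample log lines are made up; the real markers come from surefire output in the attached tarball):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;

// Scan Maven/surefire output for the "<<< FAILURE!" marker; the matching
// lines name the failing test methods and classes.
public class FailureScanner {
    static List<String> failingLines(List<String> logLines) {
        return logLines.stream()
                .filter(line -> line.contains("<<< FAILURE!"))
                .collect(Collectors.toList());
    }

    static List<String> failingLines(Path log) throws IOException {
        return failingLines(Files.readAllLines(log));
    }

    public static void main(String[] args) {
        List<String> sample = List.of(
                "[INFO] Tests run: 3, Failures: 1",
                "[ERROR] testRename(org.example.DemoTest)  Time elapsed: 0.1 s  <<< FAILURE!");
        failingLines(sample).forEach(System.out::println);
    }
}
```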
[jira] [Updated] (HADOOP-16264) [JDK11] Track failing Hadoop unit tests
[ https://issues.apache.org/jira/browse/HADOOP-16264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siyao Meng updated HADOOP-16264: Attachment: test-run1.tgz
[GitHub] [hadoop] hadoop-yetus commented on issue #761: HADOOP-13386 Upgrade avro version in Hadoop
hadoop-yetus commented on issue #761: HADOOP-13386 Upgrade avro version in Hadoop URL: https://github.com/apache/hadoop/pull/761#issuecomment-485795954 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 30 | Docker mode activated. | ||| _ Prechecks _ | | +1 | @author | 0 | The patch does not contain any @author tags. | | -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | ||| _ trunk Compile Tests _ | | 0 | mvndep | 61 | Maven dependency ordering for branch | | +1 | mvninstall | 1020 | trunk passed | | +1 | compile | 976 | trunk passed | | +1 | mvnsite | 52 | trunk passed | | +1 | shadedclient | 2778 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 47 | trunk passed | ||| _ Patch Compile Tests _ | | 0 | mvndep | 25 | Maven dependency ordering for patch | | +1 | mvninstall | 133 | the patch passed | | +1 | compile | 918 | the patch passed | | +1 | javac | 918 | the patch passed | | +1 | mvnsite | 63 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | xml | 3 | The patch has no ill-formed XML file. | | +1 | shadedclient | 651 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 46 | the patch passed | ||| _ Other Tests _ | | +1 | unit | 20 | hadoop-project in the patch passed. | | +1 | unit | 36 | hadoop-client-runtime in the patch passed. | | +1 | asflicense | 49 | The patch does not generate ASF License warnings. 
| | | | 4981 | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/hadoop-multibranch/job/PR-761/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/761 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml | | uname | Linux 8500e153291e 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 8a95ea6 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_191 | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-761/1/testReport/ | | Max. process+thread count | 444 (vs. ulimit of 5500) | | modules | C: hadoop-project hadoop-client-modules/hadoop-client-runtime U: . | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-761/1/console | | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org | This message was automatically generated.
[jira] [Commented] (HADOOP-16082) FsShell ls: Add option -i to print inode id
[ https://issues.apache.org/jira/browse/HADOOP-16082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16824083#comment-16824083 ] Adam Antal commented on HADOOP-16082: - I might not have the full context here, but HdfsLocatedFileStatus is intended for private use, and so is its inode id. The only occasion when the inode id is communicated to clients is when we ls into "/.reserved/.inodes/...". I was wondering whether a reserved virtual URI would solve the problem: would something like "/.reserved/.listId/path/to/hdfs/dir" be good for this? > FsShell ls: Add option -i to print inode id > --- > > Key: HADOOP-16082 > URL: https://issues.apache.org/jira/browse/HADOOP-16082 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Affects Versions: 3.2.0, 3.1.1 >Reporter: Siyao Meng >Assignee: Siyao Meng >Priority: Major > Attachments: HADOOP-16082.001.patch > > > When debugging FSImage corruption issues, I often need to know a file's or > directory's inode id. At the moment, the only way to do that is to use the OIV > tool to dump the FSImage and look up the filename, which is very inefficient. > Here I propose adding an option "-i" to FsShell that prints files' or > directories' inode ids. > h2. Implementation > h3. For hdfs:// (HDFS) > fileId exists in HdfsLocatedFileStatus, which is already returned to the > hdfs-client. We just need to print it in Ls#processPath(). > h3. For file:// (Local FS) > h4. Linux > Use java.nio. > h4. Windows > Windows has the concept of a "File ID", which is similar to an inode id. It is > unique in NTFS and ReFS. > h3. For other FS > The fileId entry will be "0" in FileStatus if it is not set. We could either > ignore it or throw an exception.
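For the local-filesystem case mentioned in the proposal, java.nio can already read the inode number on POSIX systems via the "unix:ino" file attribute. This is a sketch of the approach, not the patch itself:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// On Linux (and other POSIX systems), the "unix:ino" attribute exposes
// the inode number of a local file through java.nio.
public class InodeDemo {
    static long inodeOf(Path path) throws IOException {
        return (Long) Files.getAttribute(path, "unix:ino");
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("inode-demo", ".txt");
        System.out.println(tmp + " inode=" + inodeOf(tmp));
        Files.delete(tmp);
    }
}
```

Note that the "unix" attribute view is unavailable on Windows, which is why the proposal treats the NTFS/ReFS "File ID" as a separate case.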
[GitHub] [hadoop] nandakumar131 closed pull request #711: HDDS-1368. Cleanup old ReplicationManager code from SCM.
nandakumar131 closed pull request #711: HDDS-1368. Cleanup old ReplicationManager code from SCM. URL: https://github.com/apache/hadoop/pull/711
[GitHub] [hadoop] KalmanJantner opened a new pull request #761: HADOOP-13386 Upgrade avro version in Hadoop
KalmanJantner opened a new pull request #761: HADOOP-13386 Upgrade avro version in Hadoop URL: https://github.com/apache/hadoop/pull/761
[GitHub] [hadoop] hadoop-yetus commented on issue #711: HDDS-1368. Cleanup old ReplicationManager code from SCM.
hadoop-yetus commented on issue #711: HDDS-1368. Cleanup old ReplicationManager code from SCM. URL: https://github.com/apache/hadoop/pull/711#issuecomment-485763004 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 0 | Docker mode activated. | | -1 | patch | 7 | https://github.com/apache/hadoop/pull/711 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. | | Subsystem | Report/Notes | |--:|:-| | GITHUB PR | https://github.com/apache/hadoop/pull/711 | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-711/4/console | | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] kittinanasi commented on issue #713: HDDS-1192. Support -conf command line argument in GenericCli
kittinanasi commented on issue #713: HDDS-1192. Support -conf command line argument in GenericCli URL: https://github.com/apache/hadoop/pull/713#issuecomment-485734786 The TestOzoneAtRestEncryption test failure does not seem related.
[jira] [Commented] (HADOOP-15616) Incorporate Tencent Cloud COS File System Implementation
[ https://issues.apache.org/jira/browse/HADOOP-15616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16823886#comment-16823886 ] Hadoop QA commented on HADOOP-15616: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 29s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 1s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 17 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 55s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 25m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 8s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 17m 17s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-project hadoop-cloud-storage-project {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 25s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 21m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 8s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 7s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-project hadoop-cloud-storage-project {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 40s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 31s{color} | {color:green} hadoop-project in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 41s{color} | {color:green} hadoop-cos in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 42s{color} | {color:green} hadoop-cloud-storage-project in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 1m 11s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}120m 15s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e | | JIRA Issue | HADOOP-15616 | | JIRA Patch URL |
[GitHub] [hadoop] adamantal commented on issue #736: YARN-9469. Fix typo in YarnConfiguration.
adamantal commented on issue #736: YARN-9469. Fix typo in YarnConfiguration. URL: https://github.com/apache/hadoop/pull/736#issuecomment-485720436 Perfect, LGTM (non-binding).
[jira] [Commented] (HADOOP-15014) KMS should log the IP address of the clients
[ https://issues.apache.org/jira/browse/HADOOP-15014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16823871#comment-16823871 ] Zsombor Gegesy commented on HADOOP-15014: - [~jojochuang], I've just noticed that the new test class hasn't been merged: https://github.com/apache/hadoop/pull/680/files#diff-94f66597949d6c32d70fb0687fd7627f , even though it is part of the patch file too. Was that intentional? > KMS should log the IP address of the clients > > > Key: HADOOP-15014 > URL: https://issues.apache.org/jira/browse/HADOOP-15014 > Project: Hadoop Common > Issue Type: Improvement > Components: kms >Affects Versions: 2.8.1 >Reporter: Zsombor Gegesy >Assignee: Zsombor Gegesy >Priority: Major > Labels: kms, log > Fix For: 3.3.0 > > Attachments: HADOOP-15014.patch > > > Currently KMSMDCFilter only captures the http request url and method, but not > the remote address of the client. > Storing this information in a thread-local variable would help external > authorizer plugins do more thorough checks.
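The pattern the issue describes, storing per-request data such as the remote address in a thread-local so an authorizer plugin can read it later, can be sketched without the servlet API. KMSMDCFilter works along these lines, but the class below is purely illustrative:

```java
// Minimal sketch of a thread-local request context, as an MDC-style filter
// might populate it: set at request entry, read by plugins during the
// request, and always cleared in a finally block so the value never leaks
// to the next request served by the same pooled thread.
public class RequestContext {
    private static final ThreadLocal<String> REMOTE_ADDR = new ThreadLocal<>();

    public static void set(String remoteAddr) { REMOTE_ADDR.set(remoteAddr); }
    public static String getRemoteAddr()      { return REMOTE_ADDR.get(); }
    public static void clear()                { REMOTE_ADDR.remove(); }

    public static void main(String[] args) {
        try {
            set("203.0.113.7");                  // done by the filter on entry
            System.out.println(getRemoteAddr()); // read by an authorizer plugin
        } finally {
            clear();                             // done by the filter on exit
        }
    }
}
```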
[jira] [Comment Edited] (HADOOP-15616) Incorporate Tencent Cloud COS File System Implementation
[ https://issues.apache.org/jira/browse/HADOOP-15616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16823857#comment-16823857 ] YangY edited comment on HADOOP-15616 at 4/23/19 8:59 AM: - Considering that more and more Hadoop Users choose COS as an underlying storage system, we do hope to merge it into the official community of Apache Hadoop as soon as possible. Thanks everyone for taking the time to pay attention to this patch again. was (Author: yuyang733): Considering that more and more Hadoop Users choose COS as an underlying storage system. So, we do hope to merge it into the official community of Apache Hadoop as soon as possible. Thanks everyone for taking the time to pay attention to this patch again. > Incorporate Tencent Cloud COS File System Implementation > > > Key: HADOOP-15616 > URL: https://issues.apache.org/jira/browse/HADOOP-15616 > Project: Hadoop Common > Issue Type: New Feature > Components: fs/cos >Reporter: Junping Du >Assignee: YangY >Priority: Major > Attachments: HADOOP-15616.001.patch, HADOOP-15616.002.patch, > HADOOP-15616.003.patch, HADOOP-15616.004.patch, HADOOP-15616.005.patch, > HADOOP-15616.006.patch, HADOOP-15616.007.patch, HADOOP-15616.008.patch, > HADOOP-15616.009.patch, Tencent-COS-Integrated-v2.pdf, > Tencent-COS-Integrated.pdf > > > Tencent Cloud is one of the top 2 cloud vendors in the China market, and its > object store COS ([https://intl.cloud.tencent.com/product/cos]) is widely used > among China's cloud users, but it is currently hard for Hadoop users to access > data stored on COS because there is no native support for COS in Hadoop. > This work aims to integrate Tencent Cloud COS with Hadoop/Spark/Hive, just > like what we did before for S3, ADL, OSS, etc. With simple configuration, > Hadoop applications can read/write data from COS without any code change. 
[jira] [Commented] (HADOOP-15616) Incorporate Tencent Cloud COS File System Implementation
[ https://issues.apache.org/jira/browse/HADOOP-15616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16823857#comment-16823857 ] YangY commented on HADOOP-15616: Considering that more and more Hadoop Users choose COS as an underlying storage system. So, we do hope to merge it into the official community of Apache Hadoop as soon as possible. Thanks everyone for taking the time to pay attention to this patch again.
[jira] [Commented] (HADOOP-16206) Migrate from Log4j1 to Log4j2
[ https://issues.apache.org/jira/browse/HADOOP-16206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16823842#comment-16823842 ] Akira Ajisaka commented on HADOOP-16206: Note: In log4j1, the logger is set by "-Dhadoop.root.logger", which sets both the appender and the log level. In contrast, log4j2 cannot set both the appender and the log level with a single property. In other words, we need one property to set the appender and another property to set the log level. Therefore we need to rewrite/update some shell scripts. > Migrate from Log4j1 to Log4j2 > - > > Key: HADOOP-16206 > URL: https://issues.apache.org/jira/browse/HADOOP-16206 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka >Priority: Major > Attachments: HADOOP-16206-wip.001.patch > > > This sub-task is to remove the log4j1 dependency and add the log4j2 dependency.
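The difference shows up in the configuration itself: in log4j1 a single value such as -Dhadoop.root.logger=INFO,console carries both the level and the appender, while in log4j2 the level and the appender reference are separate keys. The fragment below is an illustrative sketch, not the actual file from the attached patch:

```properties
# log4j1: one property names both level and appender
#   log4j.rootLogger=${hadoop.root.logger}
#   with hadoop.root.logger=INFO,console

# log4j2 (properties format): level and appender reference are separate keys
appender.console.type = Console
appender.console.name = console
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = %d{ISO8601} %p %c: %m%n

rootLogger.level = INFO
rootLogger.appenderRef.stdout.ref = console
```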
[jira] [Commented] (HADOOP-16206) Migrate from Log4j1 to Log4j2
[ https://issues.apache.org/jira/browse/HADOOP-16206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16823772#comment-16823772 ] Akira Ajisaka commented on HADOOP-16206: Attached the wip.001 patch: * Add a -Plog4j2 profile to exclude slf4j-log4j12.jar (the slf4j -> log4j1 bridge) from the classpath * Add log4j2.properties Notice: * The scope of log4j-slf4j-impl (the slf4j -> log4j2 bridge) is set to "provided" in all the modules, to avoid a possible leak of the log4j-slf4j-impl jar file into the classpath of downstream projects. * Now log4j-slf4j-impl is not on the classpath but in the share/tool/lib directory. I'd like to add the bridge under share/common/lib to include it in the classpath, but I don't yet know how to do this.
[jira] [Updated] (HADOOP-15616) Incorporate Tencent Cloud COS File System Implementation
[ https://issues.apache.org/jira/browse/HADOOP-15616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] YangY updated HADOOP-15616: --- Attachment: (was: HADOOP-15616.009.patch)
[jira] [Updated] (HADOOP-15616) Incorporate Tencent Cloud COS File System Implementation
[ https://issues.apache.org/jira/browse/HADOOP-15616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] YangY updated HADOOP-15616: --- Attachment: HADOOP-15616.009.patch
[jira] [Updated] (HADOOP-16206) Migrate from Log4j1 to Log4j2
[ https://issues.apache.org/jira/browse/HADOOP-16206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-16206: --- Attachment: HADOOP-16206-wip.001.patch
[jira] [Updated] (HADOOP-15616) Incorporate Tencent Cloud COS File System Implementation
[ https://issues.apache.org/jira/browse/HADOOP-15616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] YangY updated HADOOP-15616: --- Attachment: (was: HADOOP-15616.009.patch)
[jira] [Updated] (HADOOP-15616) Incorporate Tencent Cloud COS File System Implementation
[ https://issues.apache.org/jira/browse/HADOOP-15616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] YangY updated HADOOP-15616: --- Attachment: HADOOP-15616.009.patch
[GitHub] [hadoop] chimney-lee opened a new pull request #760: Add order text SPACE in CLI command 'hdfs dfsrouteradmin'
chimney-lee opened a new pull request #760: Add order text SPACE in CLI command 'hdfs dfsrouteradmin'
URL: https://github.com/apache/hadoop/pull/760

When executing the CLI command 'hdfs dfsrouteradmin', the text for -order does not contain a SPACE.

----
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-7729) Send back valid HTTP response if user hits IPC port with HTTP GET
[ https://issues.apache.org/jira/browse/HADOOP-7729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16823754#comment-16823754 ]

Doris Gu commented on HADOOP-7729:
----------------------------------

I used Nessus to scan my Hadoop cluster and got the following report. I believe it is related to this issue; any opinions? Thanks very much!

|11409 - ePolicy Orchestrator HTTP GET Request Remote Format String|tcp/50020|Critical|
|11801 - HTTP Method Remote Format String|tcp/50020|Critical|
|17231 - CERN httpd CGI Name Handling Remote Overflow|tcp/50020|High|
|12201 - Web Server HTTP Basic Authorization Header Remote Overflow DoS|tcp/50020|High|
|10320 - Web Server Long URL Handling Remote Overflow DoS|tcp/50020|High|
|11089 - IBM Tivoli SecureWay WebSEAL Proxy Policy Director Encoded URL DoS|tcp/50020|Medium|
|11063 - LabVIEW Web Server HTTP Get Newline DoS|tcp/50020|Medium|
|10160 - Nortel Contivity HTTP Server cgiproc Special Character DoS|tcp/50020|Medium|
|11409 - ePolicy Orchestrator HTTP GET Request Remote Format String|tcp/8485|Critical|
|11065 - Web Server HTTP Method Handling Remote Overflow|tcp/8485|High|
|10496 - IMail Host: Header Field Handling Remote Overflow|tcp/8485|Medium|

> Send back valid HTTP response if user hits IPC port with HTTP GET
> -----------------------------------------------------------------
>
>                 Key: HADOOP-7729
>                 URL: https://issues.apache.org/jira/browse/HADOOP-7729
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: ipc
>    Affects Versions: 0.23.0
>            Reporter: Todd Lipcon
>            Assignee: Todd Lipcon
>            Priority: Major
>             Fix For: 2.0.0-alpha
>
>         Attachments: hadoop-7729.txt
>
> Often, I've seen users get confused between the IPC ports and HTTP ports for
> a daemon. It would be easy for us to detect when an HTTP GET request hits an
> IPC port, and instead of sending back garbage, we can send back a valid HTTP
> response explaining their mistake.
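[Editor's sketch] The improvement HADOOP-7729 describes, detecting that the first bytes arriving on the IPC port look like an HTTP request and replying with a valid HTTP response instead of garbage, can be sketched as below. Class and method names are illustrative, not Hadoop's actual implementation.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

public class IpcHttpProbe {
    // HTTP methods a confused browser or curl is likely to send to an IPC port.
    private static final String[] HTTP_METHODS = {"GET ", "POST ", "HEAD "};

    /** Returns true if the initial bytes of a connection look like an HTTP request. */
    public static boolean looksLikeHttp(byte[] header) {
        String prefix = new String(header, 0, Math.min(header.length, 8),
                StandardCharsets.US_ASCII);
        for (String m : HTTP_METHODS) {
            if (prefix.startsWith(m)) {
                return true;
            }
        }
        return false;
    }

    /** Writes a minimal, valid HTTP response explaining the port mixup. */
    public static void sendExplanation(OutputStream out) throws IOException {
        String body = "It looks like you are making an HTTP request to an IPC port. "
                + "This is not an HTTP endpoint; use the daemon's web UI port instead.\n";
        String response = "HTTP/1.1 404 Not Found\r\n"
                + "Content-Type: text/plain\r\n"
                + "Content-Length: " + body.length() + "\r\n"
                + "Connection: close\r\n\r\n" + body;
        out.write(response.getBytes(StandardCharsets.US_ASCII));
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        sendExplanation(buf);
        System.out.print(buf.toString("US-ASCII"));
    }
}
```

A server would peek at the connection's first bytes, and only fall back to this handler when looksLikeHttp() matches, so real IPC clients (whose header does not start with an HTTP method) are unaffected.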