[jira] [Commented] (HDFS-12711) deadly hdfs test
[ https://issues.apache.org/jira/browse/HDFS-12711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16256584#comment-16256584 ] Allen Wittenauer commented on HDFS-12711: - Ignoring the hs_err_pid log files is pretty much just sticking our collective heads in the sand about actual, real problems with the unit tests. The unit tests themselves haven't been rock solid for a very long time, even before all of this started happening. Entries have been put into the ignore pile so often that I wouldn't be surprised if the community is already at the point where most developers are ignoring precommit (e.g., commits going in with findbugs warnings reported in the issues, javadoc compilation failures being treated as "environmental", etc.). If I were actually paying more attention to day-to-day Hadoop bits these days, I'd probably be ready to disable unit tests (at least for HDFS) specifically to avoid the "cried wolf" condition. The rest of the precommit tests work properly the vast majority of the time and are probably more important given the current state of things. (Never mind the massive speed-up: QBT is hitting the 15 hour mark for a full branch-2 run when it is actually allowed to complete.) No one seems to actually care that the unit tests are a broken mess, and I doubt they'd be missed. My goal here was to prevent Hadoop from bringing down the rest of the ASF build infrastructure. It's under enough stress without this project making things that much worse. Achievement unlocked, and other Yetus users will pick up those new safety features in the next release. I should probably close this JIRA issue, unless someone else plans to spend some effort on these bugs. At least at this point in time, I view my work here as complete. Also: {code} /build/ {code} ARGH. That hasn't been valid since Hadoop used ant. A great example of "well, if we ignore it, it doesn't exist, right?" Anything that is still using /build/ almost certainly isn't safe for parallel tests and is likely contributing to a whole host of problems. > deadly hdfs test > > > Key: HDFS-12711 > URL: https://issues.apache.org/jira/browse/HDFS-12711 > Project: Hadoop HDFS > Issue Type: Test >Affects Versions: 2.9.0, 2.8.2 >Reporter: Allen Wittenauer >Priority: Critical > Attachments: HDFS-12711.branch-2.00.patch, fakepatch.branch-2.txt > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
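For context on the /build/ point: the parallel-safe alternative is to give every test its own directory under the Maven build output instead of a shared, hard-coded /build/ path. The sketch below assumes the conventional test.build.data property; the helper class and method names are hypothetical, not existing Hadoop utilities.
{code:java}
import java.io.File;
import java.util.UUID;

// Hypothetical helper: each test gets a unique directory under the Maven
// build output, so concurrently running tests never stomp on each other.
public final class TestDirs {
  private TestDirs() {}

  public static File newTestDir(Class<?> testClass) {
    // "test.build.data" is the property Hadoop tests conventionally honor;
    // fall back to target/test/data rather than the ant-era build/ tree.
    String base = System.getProperty("test.build.data", "target/test/data");
    File dir = new File(base, testClass.getSimpleName() + "-" + UUID.randomUUID());
    if (!dir.mkdirs() && !dir.isDirectory()) {
      throw new IllegalStateException("could not create " + dir);
    }
    return dir;
  }
}
{code}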
[jira] [Commented] (HDFS-6973) DFSClient does not close a closed socket, resulting in thousands of CLOSE_WAIT sockets
[ https://issues.apache.org/jira/browse/HDFS-6973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16256555#comment-16256555 ] yaolong zhu commented on HDFS-6973: --- [~robreeves] Hi Rob, I found the root cause of this issue; it lies in the close() method of ParquetFileReader:
{code:java}
@Override
public void close() throws IOException {
  try {
    if (f != null) {
      f.close();
    }
  } finally {
    if (codecFactory != null) {
      codecFactory.release();
    }
  }
}
{code}
The f.close() is actually calling the close() method of InputStream, which is an empty method, rather than the close() of H2SeekableInputStream or H1SeekableInputStream. So I updated this close method to:
{code:java}
@Override
public void close() throws IOException {
  try {
    if (f != null) {
      if (f instanceof H2SeekableInputStream) {
        ((H2SeekableInputStream) f).close();
      } else if (f instanceof H1SeekableInputStream) {
        ((H1SeekableInputStream) f).close();
      } else {
        f.close();
      }
    }
  } finally {
    if (codecFactory != null) {
      codecFactory.release();
    }
  }
}
{code}
And the problem is solved. > DFSClient does not close a closed socket, resulting in thousands of > CLOSE_WAIT sockets > -- > > Key: HDFS-6973 > URL: https://issues.apache.org/jira/browse/HDFS-6973 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs-client >Affects Versions: 2.4.0 > Environment: RHEL 6.3 -HDP 2.1 -6 RegionServers/Datanode -18T per > node -3108Regions >Reporter: steven xu > > HBase, as an HDFS client, does not close a dead connection with the datanode. > This results in over 30K+ CLOSE_WAIT sockets, and at some point HBase cannot > connect to the datanode because there are too many mapped sockets from one host to > another on the same port: 50010. > After I restart all RSs, the count of CLOSE_WAIT keeps increasing. > $ netstat -an|grep CLOSE_WAIT|wc -l > 2545 > netstat -nap|grep CLOSE_WAIT|grep 6569|wc -l > 2545 > ps -ef|grep 6569 > hbase 6569 6556 21 Aug25 ? 09:52:33 /opt/jdk1.6.0_25/bin/java > -Dproc_regionserver -XX:OnOutOfMemoryError=kill -9 %p -Xmx1000m > -XX:+UseConcMarkSweepGC > I have also reviewed these issues: > [HDFS-5697] > [HDFS-5671] > [HDFS-1836] > [HBASE-9393] > I found that in the HBase 0.98/Hadoop 2.4.0 source code these patches have been > applied. > But I do not understand why HBase 0.98/Hadoop 2.4.0 still has this issue. > Please check. Thanks a lot. > This code has been added into > BlockReaderFactory.getRemoteBlockReaderFromTcp(). Another bug may be causing my > problem: > {code:title=BlockReaderFactory.java|borderStyle=solid} > // Some comments here > private BlockReader getRemoteBlockReaderFromTcp() throws IOException { > if (LOG.isTraceEnabled()) { > LOG.trace(this + ": trying to create a remote block reader from a " + > "TCP socket"); > } > BlockReader blockReader = null; > while (true) { > BlockReaderPeer curPeer = null; > Peer peer = null; > try { > curPeer = nextTcpPeer(); > if (curPeer == null) break; > if (curPeer.fromCache) remainingCacheTries--; > peer = curPeer.peer; > blockReader = getRemoteBlockReader(peer); > return blockReader; > } catch (IOException ioe) { > if (isSecurityException(ioe)) { > if (LOG.isTraceEnabled()) { > LOG.trace(this + ": got security exception while constructing " + > "a remote block reader from " + peer, ioe); > } > throw ioe; > } > if ((curPeer != null) && curPeer.fromCache) { > // Handle an I/O error we got when using a cached peer. These are > // considered less serious, because the underlying socket may be > // stale. 
> if (LOG.isDebugEnabled()) { > LOG.debug("Closed potentially stale remote peer " + peer, ioe); > } > } else { > // Handle an I/O error we got when using a newly created peer. > LOG.warn("I/O error constructing remote block reader.", ioe); > throw ioe; > } > } finally { > if (blockReader == null) { > IOUtils.cleanup(LOG, peer); > } > } > } > return null; > } > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
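The quoted getRemoteBlockReaderFromTcp() shows the ownership hand-off pattern the referenced patches rely on: the peer is cleaned up in the finally block only when no block reader took ownership of it, otherwise a failed construction leaks a connection that the remote side eventually half-closes and the socket piles up in CLOSE_WAIT. Below is a distilled sketch of that pattern using plain java.net.Socket; it is illustrative only, not HDFS code.
{code:java}
import java.io.IOException;
import java.io.InputStream;
import java.net.Socket;

// Ownership hand-off: close the socket in finally only if no reader took
// ownership of it; once a reader is returned, it is responsible for close().
final class OwnedReader implements AutoCloseable {
  private final Socket socket;
  private final InputStream in;

  private OwnedReader(Socket socket) throws IOException {
    this.socket = socket;
    this.in = socket.getInputStream();
  }

  InputStream stream() {
    return in;
  }

  static OwnedReader open(String host, int port) throws IOException {
    Socket socket = new Socket(host, port);
    OwnedReader reader = null;
    try {
      reader = new OwnedReader(socket);   // success: reader now owns the socket
      return reader;
    } finally {
      if (reader == null) {
        socket.close();                   // failure path: nobody owns it, close here
      }
    }
  }

  @Override
  public void close() throws IOException {
    socket.close();
  }
}
{code}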
[jira] [Updated] (HDFS-12822) HDFS unit test failure in AArch64. TestDirectoryScanner.testThrottling: Throttle is too permissive
[ https://issues.apache.org/jira/browse/HDFS-12822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guangming Zhang updated HDFS-12822: --- Description: Description: Hi, when I ran the HDFS unit tests I got a failure in the TestDirectoryScanner.java test case: TestDirectoryScanner.testThrottling:624 Throttle is too permissive detail: Running org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner Tests run: 7, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 227.046 sec <<< FAILURE! - in org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner testThrottling(org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner) Time elapsed: 198.014 sec <<< FAILURE! java.lang.AssertionError: Throttle is too permissive at org.junit.Assert.fail(Assert.java:88) at org.junit.Assert.assertTrue(Assert.java:41) at org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner.testThrottling(TestDirectoryScanner.java:624) And below is the failing part of the source code in TestDirectoryScanner.java: {code:java} ... while ((retries > 0) && ((ratio < 7f) || (ratio > 10f))) { scanner = new DirectoryScanner(dataNode, fds, conf); ratio = runThrottleTest(blocks); retries -= 1; } // Waiting should be about 9x running. LOG.info("RATIO: " + ratio); assertTrue("Throttle is too restrictive", ratio <= 10f); assertTrue("Throttle is too permissive", ratio >= 7f); private float runThrottleTest(int blocks) throws IOException { scanner.setRetainDiffs(true); scan(blocks, 0, 0, 0, 0, 0); scanner.shutdown(); assertFalse(scanner.getRunStatus()); return (float)scanner.timeWaitingMs.get() / scanner.timeRunningMs.get(); } . {code} The ratio in my test is 6.0578866, which is smaller than the 7f in the code, so the assertTrue check failed. My questions are: 1. Why was the ratio set between 7f and 10f? Is it an empirical value? 2. The ratio is smaller than 7f on the AArch64 platform; is this value within the normal range? Could anyone help? Thanks a lot. was: Description: Hi, when I ran the HDFS unit tests I got a failure in the TestDirectoryScanner.java test case: TestDirectoryScanner.testThrottling:624 Throttle is too permissive detail: Running org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner Tests run: 7, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 227.046 sec <<< FAILURE! - in org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner testThrottling(org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner) Time elapsed: 198.014 sec <<< FAILURE! java.lang.AssertionError: Throttle is too permissive at org.junit.Assert.fail(Assert.java:88) at org.junit.Assert.assertTrue(Assert.java:41) at org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner.testThrottling(TestDirectoryScanner.java:624) And below is the failing part of the source code in TestDirectoryScanner.java: {code:java} ... while ((retries > 0) && ((ratio < 7f) || (ratio > 10f))) { scanner = new DirectoryScanner(dataNode, fds, conf); ratio = runThrottleTest(blocks); retries -= 1; } // Waiting should be about 9x running. LOG.info("RATIO: " + ratio); assertTrue("Throttle is too restrictive", ratio <= 10f); assertTrue("Throttle is too permissive", ratio >= 7f); private float runThrottleTest(int blocks) throws IOException { scanner.setRetainDiffs(true); scan(blocks, 0, 0, 0, 0, 0); scanner.shutdown(); assertFalse(scanner.getRunStatus()); return (float)scanner.timeWaitingMs.get() / scanner.timeRunningMs.get(); } . {code} The ratio in my test is 6.0578866, which is smaller than the 7f in the code, so the assertTrue check failed. My questions are: 1. Why was the ratio set between 7f and 10f? Is it an empirical value? 2. The ratio is smaller than 7f on the AArch64 platform; is this value within the normal range? Could anyone help? Thanks a lot. > HDFS unit test failure in AArch64. TestDirectoryScanner.testThrottling: > Throttle is too permissive > -- > > Key: HDFS-12822 > URL: https://issues.apache.org/jira/browse/HDFS-12822 > Project: Hadoop HDFS > Issue Type: Test > Components: test >Affects
[jira] [Updated] (HDFS-12822) HDFS unit test failure in AArch64. TestDirectoryScanner.testThrottling: Throttle is too permissive
[ https://issues.apache.org/jira/browse/HDFS-12822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guangming Zhang updated HDFS-12822: --- Description: Description: Hi, when I ran the HDFS unit tests I got a failure in the TestDirectoryScanner.java test case: TestDirectoryScanner.testThrottling:624 Throttle is too permissive detail: Running org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner Tests run: 7, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 227.046 sec <<< FAILURE! - in org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner testThrottling(org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner) Time elapsed: 198.014 sec <<< FAILURE! java.lang.AssertionError: Throttle is too permissive at org.junit.Assert.fail(Assert.java:88) at org.junit.Assert.assertTrue(Assert.java:41) at org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner.testThrottling(TestDirectoryScanner.java:624) And below is the failing part of the source code in TestDirectoryScanner.java: ... {quote} while ((retries > 0) && ((ratio < 7f) || (ratio > 10f))) { scanner = new DirectoryScanner(dataNode, fds, conf); ratio = runThrottleTest(blocks); retries -= 1; } // Waiting should be about 9x running. LOG.info("RATIO: " + ratio); assertTrue("Throttle is too restrictive", ratio <= 10f); assertTrue("Throttle is too permissive", ratio >= 7f); private float runThrottleTest(int blocks) throws IOException { scanner.setRetainDiffs(true); scan(blocks, 0, 0, 0, 0, 0); scanner.shutdown(); assertFalse(scanner.getRunStatus()); return (float)scanner.timeWaitingMs.get() / scanner.timeRunningMs.get(); } .{quote} The ratio in my test is 6.0578866, which is smaller than the 7f in the code, so the assertTrue check failed. My questions are: 1. Why was the ratio set between 7f and 10f? Is it an empirical value? 2. The ratio is smaller than 7f on the AArch64 platform; is this value within the normal range? Could anyone help? Thanks a lot. was: Description: Hi, when I ran the HDFS unit tests I got a failure in the TestDirectoryScanner.java test case: TestDirectoryScanner.testThrottling:624 Throttle is too permissive detail: Running org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner Tests run: 7, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 227.046 sec <<< FAILURE! - in org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner testThrottling(org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner) Time elapsed: 198.014 sec <<< FAILURE! java.lang.AssertionError: Throttle is too permissive at org.junit.Assert.fail(Assert.java:88) at org.junit.Assert.assertTrue(Assert.java:41) at org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner.testThrottling(TestDirectoryScanner.java:624) And below is the failing part of the source code in TestDirectoryScanner.java: ... while ((retries > 0) && ((ratio < 7f) || (ratio > 10f))) { scanner = new DirectoryScanner(dataNode, fds, conf); ratio = runThrottleTest(blocks); retries -= 1; } // Waiting should be about 9x running. LOG.info("RATIO: " + ratio); assertTrue("Throttle is too restrictive", ratio <= 10f); assertTrue("Throttle is too permissive", ratio >= 7f); private float runThrottleTest(int blocks) throws IOException { scanner.setRetainDiffs(true); scan(blocks, 0, 0, 0, 0, 0); scanner.shutdown(); assertFalse(scanner.getRunStatus()); return (float)scanner.timeWaitingMs.get() / scanner.timeRunningMs.get(); } . The ratio in my test is 6.0578866, which is smaller than the 7f in the code, so the assertTrue check failed. My questions are: 1. Why was the ratio set between 7f and 10f? Is it an empirical value? 2. The ratio is smaller than 7f on the AArch64 platform; is this value within the normal range? Could anyone help? Thanks a lot. > HDFS unit test failure in AArch64. TestDirectoryScanner.testThrottling: > Throttle is too permissive > -- > > Key: HDFS-12822 > URL: https://issues.apache.org/jira/browse/HDFS-12822 > Project: Hadoop HDFS > Issue Type: Test > Components: test >Affects Versions: 3.
[jira] [Commented] (HDFS-12830) Ozone: TestOzoneRpcClient#testPutKeyRatisThreeNodes fails
[ https://issues.apache.org/jira/browse/HDFS-12830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16256518#comment-16256518 ] Hadoop QA commented on HDFS-12830: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 9m 33s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} HDFS-7240 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 32s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 0s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 41s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 6s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 1s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 11s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s{color} | {color:green} HDFS-7240 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 38s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 0 unchanged - 1 fixed = 0 total (was 1) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 13s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 86m 44s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 24s{color} | {color:red} The patch generated 1 ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}153m 5s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestPread | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.fs.TestUnbuffer | | | hadoop.hdfs.qjournal.server.TestJournalNodeSync | | | hadoop.ozone.TestOzoneConfigurationFields | | Timed out junit tests | org.apache.hadoop.ozone.web.client.TestKeys | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d11161b | | JIRA Issue | HDFS-12830 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12898121/HDFS-12830-HDFS-7240.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux b8fa57019fa0 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-7240 / 87a195b | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/22128/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results
[jira] [Commented] (HDFS-12822) HDFS unit test failure in AArch64. TestDirectoryScanner.testThrottling: Throttle is too permissive
[ https://issues.apache.org/jira/browse/HDFS-12822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16256499#comment-16256499 ] Eugene Xie commented on HDFS-12822: --- That puzzles me as well. Where does the expected ratio come from? > HDFS unit test failure in AArch64. TestDirectoryScanner.testThrottling: > Throttle is too permissive > -- > > Key: HDFS-12822 > URL: https://issues.apache.org/jira/browse/HDFS-12822 > Project: Hadoop HDFS > Issue Type: Test > Components: test >Affects Versions: 3.1.0 > Environment: ARMv8 AArch64, Ubuntu16.04 >Reporter: Guangming Zhang >Priority: Minor > Labels: dtest, easyfix, maven, test > Original Estimate: 168h > Remaining Estimate: 168h > > Description: Hi, when I ran the HDFS unit tests I got a failure in > the TestDirectoryScanner.java test case: > TestDirectoryScanner.testThrottling:624 Throttle is too permissive > detail: > Running > org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner > Tests run: 7, Failures: 1, Errors: 0, Skipped: 0, Time > elapsed: 227.046 sec <<< FAILURE! - in > org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner > > testThrottling(org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner) > Time elapsed: 198.014 sec <<< FAILURE! > java.lang.AssertionError: Throttle is too permissive > at > org.junit.Assert.fail(Assert.java:88) > at > org.junit.Assert.assertTrue(Assert.java:41) > at > org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner.testThrottling(TestDirectoryScanner.java:624) > And below is the failing part of the source code in TestDirectoryScanner.java: > ... > while ((retries > 0) && ((ratio < 7f) || (ratio > 10f))) { > scanner = new DirectoryScanner(dataNode, fds, conf); > ratio = runThrottleTest(blocks); > retries -= 1; > } > // Waiting should be about 9x running. > LOG.info("RATIO: " + ratio); > assertTrue("Throttle is too restrictive", ratio <= 10f); > assertTrue("Throttle is too permissive", ratio >= 7f); > > private float runThrottleTest(int blocks) throws IOException { > scanner.setRetainDiffs(true); > scan(blocks, 0, 0, 0, 0, 0); > scanner.shutdown(); > assertFalse(scanner.getRunStatus()); > return (float)scanner.timeWaitingMs.get() / scanner.timeRunningMs.get(); > } > . > The ratio in my test is 6.0578866, which is smaller than the 7f in the code, so > the assertTrue check failed. > My questions are: > 1. Why was the ratio set between 7f and 10f? Is it an empirical value? >2. The ratio is smaller than 7f on the AArch64 platform; is this value > within the normal range? > Could anyone help? Thanks a lot. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
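On where the 7f..10f band comes from: the assertion compares time spent waiting to time spent running, and the in-code comment says waiting should be about 9x running. Assuming the test configures the directory scanner throttle (dfs.datanode.directoryscan.throttle.limit.ms.per.sec) to 100 ms of run time per second, which the excerpt itself does not show, the ideal ratio is (1000 - 100) / 100 = 9, and 7f..10f is just a tolerance band around that. A back-of-the-envelope sketch:
{code:java}
// Rough check of the 7f..10f band, assuming a throttle budget of 100 ms of
// run time per 1000 ms period (an assumption; the excerpt does not show the
// actual test configuration).
public class ThrottleRatioEstimate {
  public static void main(String[] args) {
    int periodMs = 1000;       // throttle accounting period
    int runBudgetMs = 100;     // assumed run-time budget per period
    double expectedRatio = (double) (periodMs - runBudgetMs) / runBudgetMs;
    System.out.println("expected waiting/running ratio ~= " + expectedRatio); // 9.0
    // The assertions accept 7f..10f, i.e. roughly +/-2 around the ideal 9x,
    // to absorb scheduling jitter; a slower or differently scheduled platform
    // (e.g. the AArch64 run reporting 6.06) can still fall outside the band.
  }
}
{code}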
[jira] [Updated] (HDFS-12822) HDFS unit test failure in AArch64. TestDirectoryScanner.testThrottling: Throttle is too permissive
[ https://issues.apache.org/jira/browse/HDFS-12822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guangming Zhang updated HDFS-12822: --- Description: Description: Hi, when I ran the HDFS unit tests I got a failure in the TestDirectoryScanner.java test case: TestDirectoryScanner.testThrottling:624 Throttle is too permissive detail: Running org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner Tests run: 7, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 227.046 sec <<< FAILURE! - in org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner testThrottling(org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner) Time elapsed: 198.014 sec <<< FAILURE! java.lang.AssertionError: Throttle is too permissive at org.junit.Assert.fail(Assert.java:88) at org.junit.Assert.assertTrue(Assert.java:41) at org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner.testThrottling(TestDirectoryScanner.java:624) And below is the failing part of the source code in TestDirectoryScanner.java: {code:java} ... while ((retries > 0) && ((ratio < 7f) || (ratio > 10f))) { scanner = new DirectoryScanner(dataNode, fds, conf); ratio = runThrottleTest(blocks); retries -= 1; } // Waiting should be about 9x running. LOG.info("RATIO: " + ratio); assertTrue("Throttle is too restrictive", ratio <= 10f); assertTrue("Throttle is too permissive", ratio >= 7f); private float runThrottleTest(int blocks) throws IOException { scanner.setRetainDiffs(true); scan(blocks, 0, 0, 0, 0, 0); scanner.shutdown(); assertFalse(scanner.getRunStatus()); return (float)scanner.timeWaitingMs.get() / scanner.timeRunningMs.get(); } . {code} The ratio in my test is 6.0578866, which is smaller than the 7f in the code, so the assertTrue check failed. My questions are: 1. Why was the ratio set between 7f and 10f? Is it an empirical value? 2. The ratio is smaller than 7f on the AArch64 platform; is this value within the normal range? Could anyone help? Thanks a lot. was: Description: Hi, when I ran the HDFS unit tests I got a failure in the TestDirectoryScanner.java test case: TestDirectoryScanner.testThrottling:624 Throttle is too permissive detail: Running org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner Tests run: 7, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 227.046 sec <<< FAILURE! - in org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner testThrottling(org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner) Time elapsed: 198.014 sec <<< FAILURE! java.lang.AssertionError: Throttle is too permissive at org.junit.Assert.fail(Assert.java:88) at org.junit.Assert.assertTrue(Assert.java:41) at org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner.testThrottling(TestDirectoryScanner.java:624) And below is the failing part of the source code in TestDirectoryScanner.java: ... {quote} while ((retries > 0) && ((ratio < 7f) || (ratio > 10f))) { scanner = new DirectoryScanner(dataNode, fds, conf); ratio = runThrottleTest(blocks); retries -= 1; } // Waiting should be about 9x running. LOG.info("RATIO: " + ratio); assertTrue("Throttle is too restrictive", ratio <= 10f); assertTrue("Throttle is too permissive", ratio >= 7f); private float runThrottleTest(int blocks) throws IOException { scanner.setRetainDiffs(true); scan(blocks, 0, 0, 0, 0, 0); scanner.shutdown(); assertFalse(scanner.getRunStatus()); return (float)scanner.timeWaitingMs.get() / scanner.timeRunningMs.get(); } .{quote} The ratio in my test is 6.0578866, which is smaller than the 7f in the code, so the assertTrue check failed. My questions are: 1. Why was the ratio set between 7f and 10f? Is it an empirical value? 2. The ratio is smaller than 7f on the AArch64 platform; is this value within the normal range? Could anyone help? Thanks a lot. > HDFS unit test failure in AArch64. TestDirectoryScanner.testThrottling: > Throttle is too permissive > -- > > Key: HDFS-12822 > URL: https://issues.apache.org/jira/browse/HDFS-12822 > Project: Hadoop HDFS > Issue Type: Test > Components: test >
[jira] [Commented] (HDFS-12808) Add LOG.isDebugEnabled() guard for LOG.debug("...")
[ https://issues.apache.org/jira/browse/HDFS-12808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16256494#comment-16256494 ] Hadoop QA commented on HDFS-12808: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 28s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 20s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 3s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 15s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 41s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 7s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 54s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}118m 53s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}176m 49s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestEncryptedTransfer | | | hadoop.hdfs.server.namenode.ha.TestEditLogTailer | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure | | | hadoop.hdfs.TestErasureCodingMultipleRacks | | | hadoop.fs.TestUnbuffer | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.server.balancer.TestBalancerRPCDelay | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | HDFS-12808 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12898108/HDFS-12808.01.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 46ef97827b79 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / e182e77 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/22127/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Buil
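For reference, the pattern the HDFS-12808 title refers to: wrap expensive LOG.debug argument construction in an isDebugEnabled() check, or use SLF4J placeholders so the message is only built when debug logging is enabled. A minimal sketch, not taken from the patch; BlockSummary is a stand-in type, not an HDFS class:
{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class DebugGuardExample {
  private static final Logger LOG = LoggerFactory.getLogger(DebugGuardExample.class);

  void report(BlockSummary block) {
    // Guarded form: the costly argument construction only runs when
    // debug logging is actually enabled.
    if (LOG.isDebugEnabled()) {
      LOG.debug("Processing block " + block.toDetailedString());
    }

    // With SLF4J placeholders the guard is only needed when building the
    // argument itself is expensive; the string formatting is deferred.
    LOG.debug("Processing block {}", block);
  }

  // Stand-in type so the sketch compiles.
  static class BlockSummary {
    String toDetailedString() { return "blk_0000_details"; }
    @Override public String toString() { return "blk_0000"; }
  }
}
{code}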
[jira] [Commented] (HDFS-12778) [READ] Report multiple locations for PROVIDED blocks
[ https://issues.apache.org/jira/browse/HDFS-12778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16256452#comment-16256452 ] Hadoop QA commented on HDFS-12778: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 5m 43s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} HDFS-9806 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 41s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 52s{color} | {color:green} HDFS-9806 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 11s{color} | {color:green} HDFS-9806 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 0s{color} | {color:green} HDFS-9806 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 30s{color} | {color:green} HDFS-9806 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 53s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 31s{color} | {color:red} hadoop-tools/hadoop-fs2img in HDFS-9806 has 1 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 18s{color} | {color:green} HDFS-9806 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 2s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 17s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}121m 41s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 4s{color} | {color:green} hadoop-fs2img in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 43s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}218m 36s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestReadStripedFileWithMissingBlocks | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170 | | | hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics | | | hadoop.hdfs.TestLeaseRecoveryStriped | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 | | | hadoop.hdfs.TestSafeModeWithStripedFileWithRandomECPolicy | | | hadoop.hdfs.qjournal.server.TestJournalNodeSync | | | hadoop.hdfs.server.blockmanagement.TestReplicationPolicy | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | HDFS-12778 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12898085/HDFS-12778-HDFS-9806.003.patch | | Optional Tests | asflicense compile javac
[jira] [Commented] (HDFS-12813) RequestHedgingProxyProvider can hide Exception thrown from the Namenode for proxy size of 1
[ https://issues.apache.org/jira/browse/HDFS-12813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16256439#comment-16256439 ] Tsz Wo Nicholas Sze commented on HDFS-12813: The patch looks good. (The existing code, however, does not.) Some comments/questions: - Let's have two unwrap methods to handle the two different cases -# ExecutionException(InvocationTargetException(SomeException)) -# InvocationTargetException(SomeException) - Also, the parameter of these two methods should be ExecutionException or InvocationTargetException instead of Exception. - Pass the unwrapped exception to logProxyException. Then, isStandbyException does not need to unwrap it again. - Question: it seems to me that the code expects either ExecutionException or InvocationTargetException; could we catch ExecutionException or InvocationTargetException instead of Exception? - Question: the patch changes successfulProxy to lastUsedProxy. Then, getProxy() may return the "last unsuccessful proxy". Is that okay? > RequestHedgingProxyProvider can hide Exception thrown from the Namenode for > proxy size of 1 > --- > > Key: HDFS-12813 > URL: https://issues.apache.org/jira/browse/HDFS-12813 > Project: Hadoop HDFS > Issue Type: Bug > Components: ha >Reporter: Mukul Kumar Singh >Assignee: Mukul Kumar Singh > Attachments: HDFS-12813.001.patch, HDFS-12813.002.patch > > > HDFS-11395 fixed the problem where the MultiException thrown by > RequestHedgingProxyProvider was hidden. However, when the target proxy size is > 1, unwrapping is not done for the InvocationTargetException. For a target > proxy size of 1, the unwrapping should be done to the first level, whereas for > multiple proxies it should be done at two levels. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
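A sketch of the two-level unwrapping being suggested above; this is not the actual RequestHedgingProxyProvider code, just the shape of the two overloads:
{code:java}
import java.lang.reflect.InvocationTargetException;
import java.util.concurrent.ExecutionException;

final class Unwrap {
  private Unwrap() {}

  /** Case 1: ExecutionException(InvocationTargetException(SomeException)). */
  static Throwable unwrap(ExecutionException e) {
    Throwable cause = e.getCause();
    if (cause instanceof InvocationTargetException) {
      return unwrap((InvocationTargetException) cause);
    }
    return cause != null ? cause : e;
  }

  /** Case 2: InvocationTargetException(SomeException). */
  static Throwable unwrap(InvocationTargetException e) {
    Throwable cause = e.getCause();
    return cause != null ? cause : e;
  }
}
{code}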
[jira] [Commented] (HDFS-12808) Add LOG.isDebugEnabled() guard for LOG.debug("...")
[ https://issues.apache.org/jira/browse/HDFS-12808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16256427#comment-16256427 ] Hadoop QA commented on HDFS-12808: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 5m 38s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 7s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 54s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 51s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 39s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 46s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 55s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}124m 17s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}175m 16s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.fs.TestUnbuffer | | | hadoop.hdfs.web.TestWebHdfsTimeouts | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure | | | hadoop.hdfs.TestReadStripedFileWithMissingBlocks | | | hadoop.hdfs.server.balancer.TestBalancerWithEncryptedTransfer | | | hadoop.hdfs.server.balancer.TestBalancerRPCDelay | | | hadoop.hdfs.TestMaintenanceState | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | HDFS-12808 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12898086/HDFS-12808.00.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 2c32f80868db 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 0987a7b | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/22125/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/22125/testRepo
[jira] [Updated] (HDFS-12500) Ozone: add logger for oz shell commands and move error stack traces to DEBUG level
[ https://issues.apache.org/jira/browse/HDFS-12500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HDFS-12500: - Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: HDFS-7240 Status: Resolved (was: Patch Available) Committed this to the feature branch. Thanks [~anu] for the review and thanks [~cheersyang] for filing this. > Ozone: add logger for oz shell commands and move error stack traces to DEBUG > level > -- > > Key: HDFS-12500 > URL: https://issues.apache.org/jira/browse/HDFS-12500 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Weiwei Yang >Assignee: Yiqun Lin >Priority: Minor > Fix For: HDFS-7240 > > Attachments: HDFS-12500-HDFS-7240.001.patch > > > Per the discussion in HDFS-12489, to reduce the verbosity of logs when exceptions > happen, let's add a logger to {{Shell.java}} and move error stack traces to > DEBUG level. > And to track the execution time of oz commands, once the logger is added, let's > add a debug log that prints the total time a command execution took. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12500) Ozone: add logger for oz shell commands and move error stack traces to DEBUG level
[ https://issues.apache.org/jira/browse/HDFS-12500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16256408#comment-16256408 ] Yiqun Lin commented on HDFS-12500: -- Thanks for the review, [~anu]. I'd like to get this committed, :). > Ozone: add logger for oz shell commands and move error stack traces to DEBUG > level > -- > > Key: HDFS-12500 > URL: https://issues.apache.org/jira/browse/HDFS-12500 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Weiwei Yang >Assignee: Yiqun Lin >Priority: Minor > Attachments: HDFS-12500-HDFS-7240.001.patch > > > Per the discussion in HDFS-12489, to reduce the verbosity of logs when exceptions > happen, let's add a logger to {{Shell.java}} and move error stack traces to > DEBUG level. > And to track the execution time of oz commands, once the logger is added, let's > add a debug log that prints the total time a command execution took. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
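A rough sketch of what the description asks for: keep the user-facing error to one line, log the stack trace only at DEBUG, and record the total execution time. The class and method names below are made up for illustration; the real Shell.java in the feature branch will differ.
{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class OzShellSketch {
  private static final Logger LOG = LoggerFactory.getLogger(OzShellSketch.class);

  int run(String[] args) {
    long start = System.currentTimeMillis();
    try {
      execute(args);
      return 0;
    } catch (Exception e) {
      // Users see a one-line error; the full stack trace only at DEBUG level.
      System.err.println("Command failed: " + e.getMessage());
      LOG.debug("Command {} failed", args.length > 0 ? args[0] : "", e);
      return 1;
    } finally {
      // Debug log tracking how long the command execution took.
      LOG.debug("Command execution took {} ms", System.currentTimeMillis() - start);
    }
  }

  private void execute(String[] args) throws Exception {
    // placeholder for the actual command dispatch
  }
}
{code}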
[jira] [Updated] (HDFS-12830) Ozone: TestOzoneRpcClient#testPutKeyRatisThreeNodes fails
[ https://issues.apache.org/jira/browse/HDFS-12830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HDFS-12830: - Status: Patch Available (was: Open) > Ozone: TestOzoneRpcClient#testPutKeyRatisThreeNodes fails > - > > Key: HDFS-12830 > URL: https://issues.apache.org/jira/browse/HDFS-12830 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Yiqun Lin >Assignee: Yiqun Lin > Attachments: HDFS-12830-HDFS-7240.001.patch > > > The test {{TestOzoneRpcClient#testPutKeyRatisThreeNodes}} always fails in the > feature branch. Stack trace: > {noformat} > 2017-11-16 11:44:28,512 [IPC Server handler 7 on 43551] ERROR > ratis.RatisManagerImpl (RatisManagerImpl.java:getPipeline(130)) - Get > pipeline call failed. We are not able to find free nodes or operational > pipeline. > 2017-11-16 11:44:28,513 [IPC Server handler 7 on 43551] WARN ipc.Server > (Server.java:logException(2721)) - IPC Server handler 7 on 43551, call > Call#679 Retry#0 > org.apache.hadoop.ozone.protocol.ScmBlockLocationProtocol.allocateScmBlock > from 172.17.0.2:42671 > java.lang.NullPointerException > at > org.apache.hadoop.scm.container.common.helpers.ContainerInfo.getProtobuf(ContainerInfo.java:132) > at > org.apache.hadoop.ozone.scm.container.ContainerMapping.allocateContainer(ContainerMapping.java:221) > at > org.apache.hadoop.ozone.scm.block.BlockManagerImpl.preAllocateContainers(BlockManagerImpl.java:190) > at > org.apache.hadoop.ozone.scm.block.BlockManagerImpl.allocateBlock(BlockManagerImpl.java:292) > at > org.apache.hadoop.ozone.scm.StorageContainerManager.allocateBlock(StorageContainerManager.java:1047) > at > org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.allocateScmBlock(ScmBlockLocationProtocolServerSideTranslatorPB.java:107) > at > {noformat} > The warn log {{Get pipeline call failed. We are not able to find free nodes > or operational pipeline.}} is the failed reason. This is broken by the change > in HDFS-12756. It missed resetting datanode num. > {code} > -cluster = new MiniOzoneCluster.Builder(conf).numDataNodes(5) > +cluster = new MiniOzoneClassicCluster.Builder(conf) > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12830) Ozone: TestOzoneRpcClient#testPutKeyRatisThreeNodes fails
[ https://issues.apache.org/jira/browse/HDFS-12830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HDFS-12830: - Attachment: HDFS-12830-HDFS-7240.001.patch > Ozone: TestOzoneRpcClient#testPutKeyRatisThreeNodes fails > - > > Key: HDFS-12830 > URL: https://issues.apache.org/jira/browse/HDFS-12830 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Yiqun Lin >Assignee: Yiqun Lin > Attachments: HDFS-12830-HDFS-7240.001.patch > > > The test {{TestOzoneRpcClient#testPutKeyRatisThreeNodes}} always fails in the > feature branch. Stack trace: > {noformat} > 2017-11-16 11:44:28,512 [IPC Server handler 7 on 43551] ERROR > ratis.RatisManagerImpl (RatisManagerImpl.java:getPipeline(130)) - Get > pipeline call failed. We are not able to find free nodes or operational > pipeline. > 2017-11-16 11:44:28,513 [IPC Server handler 7 on 43551] WARN ipc.Server > (Server.java:logException(2721)) - IPC Server handler 7 on 43551, call > Call#679 Retry#0 > org.apache.hadoop.ozone.protocol.ScmBlockLocationProtocol.allocateScmBlock > from 172.17.0.2:42671 > java.lang.NullPointerException > at > org.apache.hadoop.scm.container.common.helpers.ContainerInfo.getProtobuf(ContainerInfo.java:132) > at > org.apache.hadoop.ozone.scm.container.ContainerMapping.allocateContainer(ContainerMapping.java:221) > at > org.apache.hadoop.ozone.scm.block.BlockManagerImpl.preAllocateContainers(BlockManagerImpl.java:190) > at > org.apache.hadoop.ozone.scm.block.BlockManagerImpl.allocateBlock(BlockManagerImpl.java:292) > at > org.apache.hadoop.ozone.scm.StorageContainerManager.allocateBlock(StorageContainerManager.java:1047) > at > org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.allocateScmBlock(ScmBlockLocationProtocolServerSideTranslatorPB.java:107) > at > {noformat} > The warn log {{Get pipeline call failed. We are not able to find free nodes > or operational pipeline.}} is the failed reason. This is broken by the change > in HDFS-12756. It missed resetting datanode num. > {code} > -cluster = new MiniOzoneCluster.Builder(conf).numDataNodes(5) > +cluster = new MiniOzoneClassicCluster.Builder(conf) > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12830) Ozone: TestOzoneRpcClient#testPutKeyRatisThreeNodes fails
[ https://issues.apache.org/jira/browse/HDFS-12830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16256400#comment-16256400 ] Yiqun Lin commented on HDFS-12830: -- Attaching a patch that sets the datanode count for the mini cluster. > Ozone: TestOzoneRpcClient#testPutKeyRatisThreeNodes fails > - > > Key: HDFS-12830 > URL: https://issues.apache.org/jira/browse/HDFS-12830 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Yiqun Lin >Assignee: Yiqun Lin > Attachments: HDFS-12830-HDFS-7240.001.patch > > > The test {{TestOzoneRpcClient#testPutKeyRatisThreeNodes}} always fails in the > feature branch. Stack trace: > {noformat} > 2017-11-16 11:44:28,512 [IPC Server handler 7 on 43551] ERROR > ratis.RatisManagerImpl (RatisManagerImpl.java:getPipeline(130)) - Get > pipeline call failed. We are not able to find free nodes or operational > pipeline. > 2017-11-16 11:44:28,513 [IPC Server handler 7 on 43551] WARN ipc.Server > (Server.java:logException(2721)) - IPC Server handler 7 on 43551, call > Call#679 Retry#0 > org.apache.hadoop.ozone.protocol.ScmBlockLocationProtocol.allocateScmBlock > from 172.17.0.2:42671 > java.lang.NullPointerException > at > org.apache.hadoop.scm.container.common.helpers.ContainerInfo.getProtobuf(ContainerInfo.java:132) > at > org.apache.hadoop.ozone.scm.container.ContainerMapping.allocateContainer(ContainerMapping.java:221) > at > org.apache.hadoop.ozone.scm.block.BlockManagerImpl.preAllocateContainers(BlockManagerImpl.java:190) > at > org.apache.hadoop.ozone.scm.block.BlockManagerImpl.allocateBlock(BlockManagerImpl.java:292) > at > org.apache.hadoop.ozone.scm.StorageContainerManager.allocateBlock(StorageContainerManager.java:1047) > at > org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.allocateScmBlock(ScmBlockLocationProtocolServerSideTranslatorPB.java:107) > at > {noformat} > The warn log {{Get pipeline call failed. We are not able to find free nodes > or operational pipeline.}} is the failed reason. This is broken by the change > in HDFS-12756. It missed resetting datanode num. > {code} > -cluster = new MiniOzoneCluster.Builder(conf).numDataNodes(5) > +cluster = new MiniOzoneClassicCluster.Builder(conf) > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12830) Ozone: TestOzoneRpcClient#testPutKeyRatisThreeNodes fails
[ https://issues.apache.org/jira/browse/HDFS-12830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HDFS-12830: - Description: The test {{TestOzoneRpcClient#testPutKeyRatisThreeNodes}} always fails in the feature branch. Stack trace: {noformat} 2017-11-16 11:44:28,512 [IPC Server handler 7 on 43551] ERROR ratis.RatisManagerImpl (RatisManagerImpl.java:getPipeline(130)) - Get pipeline call failed. We are not able to find free nodes or operational pipeline. 2017-11-16 11:44:28,513 [IPC Server handler 7 on 43551] WARN ipc.Server (Server.java:logException(2721)) - IPC Server handler 7 on 43551, call Call#679 Retry#0 org.apache.hadoop.ozone.protocol.ScmBlockLocationProtocol.allocateScmBlock from 172.17.0.2:42671 java.lang.NullPointerException at org.apache.hadoop.scm.container.common.helpers.ContainerInfo.getProtobuf(ContainerInfo.java:132) at org.apache.hadoop.ozone.scm.container.ContainerMapping.allocateContainer(ContainerMapping.java:221) at org.apache.hadoop.ozone.scm.block.BlockManagerImpl.preAllocateContainers(BlockManagerImpl.java:190) at org.apache.hadoop.ozone.scm.block.BlockManagerImpl.allocateBlock(BlockManagerImpl.java:292) at org.apache.hadoop.ozone.scm.StorageContainerManager.allocateBlock(StorageContainerManager.java:1047) at org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.allocateScmBlock(ScmBlockLocationProtocolServerSideTranslatorPB.java:107) at {noformat} The {{Get pipeline call failed. We are not able to find free nodes or operational pipeline.}} log message points to the reason for the failure. This was broken by the change in HDFS-12756, which missed setting the datanode count. {code} -cluster = new MiniOzoneCluster.Builder(conf).numDataNodes(5) +cluster = new MiniOzoneClassicCluster.Builder(conf) {code} was: The test {{TestOzoneRpcClient#testPutKeyRatisThreeNodes}} always fails in the feature branch. Stack trace: {noformat} 2017-11-16 11:44:28,512 [IPC Server handler 7 on 43551] ERROR ratis.RatisManagerImpl (RatisManagerImpl.java:getPipeline(130)) - Get pipeline call failed. We are not able to find free nodes or operational pipeline. 2017-11-16 11:44:28,513 [IPC Server handler 7 on 43551] WARN ipc.Server (Server.java:logException(2721)) - IPC Server handler 7 on 43551, call Call#679 Retry#0 org.apache.hadoop.ozone.protocol.ScmBlockLocationProtocol.allocateScmBlock from 172.17.0.2:42671 java.lang.NullPointerException at org.apache.hadoop.scm.container.common.helpers.ContainerInfo.getProtobuf(ContainerInfo.java:132) at org.apache.hadoop.ozone.scm.container.ContainerMapping.allocateContainer(ContainerMapping.java:221) at org.apache.hadoop.ozone.scm.block.BlockManagerImpl.preAllocateContainers(BlockManagerImpl.java:190) at org.apache.hadoop.ozone.scm.block.BlockManagerImpl.allocateBlock(BlockManagerImpl.java:292) at org.apache.hadoop.ozone.scm.StorageContainerManager.allocateBlock(StorageContainerManager.java:1047) at org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.allocateScmBlock(ScmBlockLocationProtocolServerSideTranslatorPB.java:107) at {noformat} The {{Get pipeline call failed. We are not able to find free nodes or operational pipeline.}} log message points to the reason for the failure. This was broken by the change in HDFS-12756. It didn't set the datanode count and used the default value. 
{code} -cluster = new MiniOzoneCluster.Builder(conf).numDataNodes(5) +cluster = new MiniOzoneClassicCluster.Builder(conf) {code} > Ozone: TestOzoneRpcClient#testPutKeyRatisThreeNodes fails > - > > Key: HDFS-12830 > URL: https://issues.apache.org/jira/browse/HDFS-12830 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Yiqun Lin >Assignee: Yiqun Lin > > The test {{TestOzoneRpcClient#testPutKeyRatisThreeNodes}} always fails in the > feature branch. Stack trace: > {noformat} > 2017-11-16 11:44:28,512 [IPC Server handler 7 on 43551] ERROR > ratis.RatisManagerImpl (RatisManagerImpl.java:getPipeline(130)) - Get > pipeline call failed. We are not able to find free nodes or operational > pipeline. > 2017-11-16 11:44:28,513 [IPC Server handler 7 on 43551] WARN ipc.Server > (Server.java:logException(2721)) - IPC Server handler 7 on 43551, call > Call#679 Retry#0 > org.apache.hadoop.ozone.protocol.ScmBlockLocationProtocol.allocateScmBlock > from 172.17.0.2:42671 > java.lang.NullPointerException > at > org.apache.hadoop.scm.container.common.helpers.ContainerInfo.getProtobuf(ContainerInfo.java:132) > at > org.apache.hadoop.ozone.scm.container.ContainerMapping.allocateContainer(ContainerMapping.java:221) > at > org.a
[jira] [Created] (HDFS-12830) Ozone: TestOzoneRpcClient#testPutKeyRatisThreeNodes fails
Yiqun Lin created HDFS-12830: Summary: Ozone: TestOzoneRpcClient#testPutKeyRatisThreeNodes fails Key: HDFS-12830 URL: https://issues.apache.org/jira/browse/HDFS-12830 Project: Hadoop HDFS Issue Type: Sub-task Components: ozone Affects Versions: HDFS-7240 Reporter: Yiqun Lin Assignee: Yiqun Lin The test {{TestOzoneRpcClient#testPutKeyRatisThreeNodes}} always fails in the feature branch. Stack trace: {noformat} 2017-11-16 11:44:28,512 [IPC Server handler 7 on 43551] ERROR ratis.RatisManagerImpl (RatisManagerImpl.java:getPipeline(130)) - Get pipeline call failed. We are not able to find free nodes or operational pipeline. 2017-11-16 11:44:28,513 [IPC Server handler 7 on 43551] WARN ipc.Server (Server.java:logException(2721)) - IPC Server handler 7 on 43551, call Call#679 Retry#0 org.apache.hadoop.ozone.protocol.ScmBlockLocationProtocol.allocateScmBlock from 172.17.0.2:42671 java.lang.NullPointerException at org.apache.hadoop.scm.container.common.helpers.ContainerInfo.getProtobuf(ContainerInfo.java:132) at org.apache.hadoop.ozone.scm.container.ContainerMapping.allocateContainer(ContainerMapping.java:221) at org.apache.hadoop.ozone.scm.block.BlockManagerImpl.preAllocateContainers(BlockManagerImpl.java:190) at org.apache.hadoop.ozone.scm.block.BlockManagerImpl.allocateBlock(BlockManagerImpl.java:292) at org.apache.hadoop.ozone.scm.StorageContainerManager.allocateBlock(StorageContainerManager.java:1047) at org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.allocateScmBlock(ScmBlockLocationProtocolServerSideTranslatorPB.java:107) at {noformat} The warn log {{Get pipeline call failed. We are not able to find free nodes or operational pipeline.}} is the failed reason. This is broken by the change in HDFS-12756. It didn't reset datanode num and use default value. {code} -cluster = new MiniOzoneCluster.Builder(conf).numDataNodes(5) +cluster = new MiniOzoneClassicCluster.Builder(conf) {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
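For context, the failure mode is that the rebuilt mini cluster comes up with too few datanodes for a Ratis three-node pipeline. A minimal sketch of the kind of reset the attached patch presumably makes, assuming the new builder still exposes the same numDataNodes(int) setter as the removed line and a terminal build() call (this is not the attached patch itself):
{code}
// Hypothetical sketch: keep the MiniOzoneClassicCluster builder introduced by
// HDFS-12756 but restore the explicit datanode count so the SCM can find
// enough nodes to form a Ratis three-node pipeline.
cluster = new MiniOzoneClassicCluster.Builder(conf)
    .numDataNodes(5)
    // ... remaining builder calls unchanged ...
    .build();
{code}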
[jira] [Commented] (HDFS-12823) Backport HDFS-9259 "Make SO_SNDBUF size configurable at DFSClient" to branch-2.7
[ https://issues.apache.org/jira/browse/HDFS-12823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16256388#comment-16256388 ] Hadoop QA commented on HDFS-12823: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 10m 32s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} branch-2.7 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 49s{color} | {color:green} branch-2.7 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 2s{color} | {color:green} branch-2.7 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 34s{color} | {color:green} branch-2.7 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s{color} | {color:green} branch-2.7 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 1s{color} | {color:green} branch-2.7 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 46s{color} | {color:green} branch-2.7 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 31s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 2 new + 822 unchanged - 0 fixed = 824 total (was 822) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 60 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 0s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 47s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 80m 9s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 1m 0s{color} | {color:red} The patch generated 131 ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}116m 48s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Unreaped Processes | hadoop-hdfs:20 | | Failed junit tests | hadoop.hdfs.server.namenode.TestEditLogJournalFailures | | | hadoop.hdfs.server.namenode.TestCheckPointForSecurityTokens | | | hadoop.hdfs.TestBlockMissingException | | Timed out junit tests | org.apache.hadoop.hdfs.TestModTime | | | org.apache.hadoop.hdfs.TestWriteRead | | | org.apache.hadoop.hdfs.TestSetrepIncreasing | | | org.apache.hadoop.hdfs.TestClientProtocolForPipelineRecovery | | | org.apache.hadoop.hdfs.TestFileCreation | | | org.apache.hadoop.hdfs.TestFileAppend | | | org.apache.hadoop.hdfs.TestPread | | | org.apache.hadoop.hdfs.TestDFSFinalize | | | org.apache.hadoop.hdfs.TestDecommission | | | org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS | | | org.apache.hadoop.hdfs.TestDFSRemove | | | org.apache.hadoop.hdfs.TestLocalDFS | | | org.apache.hadoop.hdfs.TestLease | | | org.apache.hadoop.hdfs.TestRenameWhileOpen | | | org.apache.hadoop.hdfs.TestFSOutputSummer | | | org.apache.hadoop.hdfs.TestBlockReaderFactory | | | org.apache.hadoop.hdfs.TestPersistBlocks | | | org.apache.hadoop.hdfs.TestGetBlocks | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:67e87c9 | | JIRA Issue | HDFS-12823 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attach
[jira] [Commented] (HDFS-12711) deadly hdfs test
[ https://issues.apache.org/jira/browse/HDFS-12711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16256368#comment-16256368 ] Erik Krogen commented on HDFS-12711: Thanks Sean. Agreed that it is not really a big issue but it does make it more likely for a developer to miss an actual license violation (a "QA bot cried wolf" situation). It seems maybe it would make more sense for the {{hs_err_pid*.log}} files to appear in an already-excluded area, like within {{/build/}}, to represent their transient nature. I assume their location should be configurable in some way? > deadly hdfs test > > > Key: HDFS-12711 > URL: https://issues.apache.org/jira/browse/HDFS-12711 > Project: Hadoop HDFS > Issue Type: Test >Affects Versions: 2.9.0, 2.8.2 >Reporter: Allen Wittenauer >Priority: Critical > Attachments: HDFS-12711.branch-2.00.patch, fakepatch.branch-2.txt > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
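On the configurability question: the location of JVM fatal-error logs is controlled by the -XX:ErrorFile flag, so the forked test JVMs could write them under target/ (already excluded from license checks) instead of the module root. A hedged sketch of what that could look like for surefire, not tied to Hadoop's actual surefire/argLine wiring:
{code}
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <!-- Redirect hs_err_pid*.log files into the build directory; %p expands
         to the pid of the crashing JVM. -->
    <argLine>-XX:ErrorFile=${project.build.directory}/hs_err_pid%p.log</argLine>
  </configuration>
</plugin>
{code}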
[jira] [Resolved] (HDFS-12827) Need Clarity on Replica Placement: The First Baby Steps in HDFS Architecture documentation
[ https://issues.apache.org/jira/browse/HDFS-12827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham resolved HDFS-12827. --- Resolution: Not A Problem Assignee: Bharat Viswanadham > Need Clarity on Replica Placement: The First Baby Steps in HDFS Architecture > documentation > -- > > Key: HDFS-12827 > URL: https://issues.apache.org/jira/browse/HDFS-12827 > Project: Hadoop HDFS > Issue Type: Improvement > Components: documentation >Reporter: Suri babu Nuthalapati >Assignee: Bharat Viswanadham >Priority: Minor > > The placement should be this: > https://hadoop.apache.org/docs/r1.2.1/hdfs_design.html > HDFS’s placement policy is to put one replica on one node in the local rack, > another on a node in a different (remote) rack, and the last on a different > node in the same remote rack. > Hadoop Definitive guide says the same and I have tested and saw the same > behavior as above. > > But the documentation in the versions after r2.5.2 it was mentioned as: > http://hadoop.apache.org/docs/r2.5.2/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html > HDFS’s placement policy is to put one replica on one node in the local rack, > another on a different node in the local rack, and the last on a different > node in a different rack. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12808) Add LOG.isDebugEnabled() guard for LOG.debug("...")
[ https://issues.apache.org/jira/browse/HDFS-12808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16256362#comment-16256362 ] Bharat Viswanadham commented on HDFS-12808: --- [~goiri] Thanks for review. Uploaded patch v01 to address review comments. > Add LOG.isDebugEnabled() guard for LOG.debug("...") > --- > > Key: HDFS-12808 > URL: https://issues.apache.org/jira/browse/HDFS-12808 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Mehran Hassani >Assignee: Bharat Viswanadham >Priority: Minor > Attachments: HDFS-12808.00.patch, HDFS-12808.01.patch > > > I am conducting research on log related bugs. I tried to make a tool to fix > repetitive yet simple patterns of bugs that are related to logs. In this > file, there is a debug level logging statement containing multiple string > concatenation without the if statement before them: > hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestCachingStrategy.java, > LOG.debug("got fadvise(offset=" + offset + ", len=" + len +",flags=" + flags > + ")");, 82 > Would you be interested in adding the if, to the logging statement? -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
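For readers following along, the requested change is the usual guard around string-concatenating debug statements; a minimal sketch based on the statement quoted in the description:
{code}
// Only build the message string when debug logging is actually enabled.
if (LOG.isDebugEnabled()) {
  LOG.debug("got fadvise(offset=" + offset + ", len=" + len
      + ", flags=" + flags + ")");
}
{code}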
[jira] [Updated] (HDFS-12808) Add LOG.isDebugEnabled() guard for LOG.debug("...")
[ https://issues.apache.org/jira/browse/HDFS-12808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDFS-12808: -- Attachment: HDFS-12808.01.patch > Add LOG.isDebugEnabled() guard for LOG.debug("...") > --- > > Key: HDFS-12808 > URL: https://issues.apache.org/jira/browse/HDFS-12808 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Mehran Hassani >Assignee: Bharat Viswanadham >Priority: Minor > Attachments: HDFS-12808.00.patch, HDFS-12808.01.patch > > > I am conducting research on log related bugs. I tried to make a tool to fix > repetitive yet simple patterns of bugs that are related to logs. In this > file, there is a debug level logging statement containing multiple string > concatenation without the if statement before them: > hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestCachingStrategy.java, > LOG.debug("got fadvise(offset=" + offset + ", len=" + len +",flags=" + flags > + ")");, 82 > Would you be interested in adding the if, to the logging statement? -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12638) NameNode exits due to ReplicationMonitor thread received Runtime exception in ReplicationWork#chooseTargets
[ https://issues.apache.org/jira/browse/HDFS-12638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Konstantin Shvachko updated HDFS-12638: --- Target Version/s: 2.8.3 (was: 3.1.0) > NameNode exits due to ReplicationMonitor thread received Runtime exception in > ReplicationWork#chooseTargets > --- > > Key: HDFS-12638 > URL: https://issues.apache.org/jira/browse/HDFS-12638 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Affects Versions: 2.8.2 >Reporter: Jiandan Yang >Priority: Blocker > Attachments: HDFS-12638-branch-2.8.2.001.patch, HDFS-12638.002.patch, > OphanBlocksAfterTruncateDelete.jpg > > > Active NamNode exit due to NPE, I can confirm that the BlockCollection passed > in when creating ReplicationWork is null, but I do not know why > BlockCollection is null, By view history I found > [HDFS-9754|https://issues.apache.org/jira/browse/HDFS-9754] remove judging > whether BlockCollection is null. > NN logs are as following: > {code:java} > 2017-10-11 16:29:06,161 ERROR [ReplicationMonitor] > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: > ReplicationMonitor thread received Runtime exception. > java.lang.NullPointerException > at > org.apache.hadoop.hdfs.server.blockmanagement.ReplicationWork.chooseTargets(ReplicationWork.java:55) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWorkForBlocks(BlockManager.java:1532) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWork(BlockManager.java:1491) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeDatanodeWork(BlockManager.java:3792) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:3744) > at java.lang.Thread.run(Thread.java:834) > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12823) Backport HDFS-9259 "Make SO_SNDBUF size configurable at DFSClient" to branch-2.7
[ https://issues.apache.org/jira/browse/HDFS-12823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16256324#comment-16256324 ] Manoj Govindassamy commented on HDFS-12823: --- v02 LGTM, +1. Thanks [~xkrogen]. > Backport HDFS-9259 "Make SO_SNDBUF size configurable at DFSClient" to > branch-2.7 > > > Key: HDFS-12823 > URL: https://issues.apache.org/jira/browse/HDFS-12823 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs, hdfs-client >Reporter: Erik Krogen >Assignee: Erik Krogen > Attachments: HDFS-12823-branch-2.7.000.patch, > HDFS-12823-branch-2.7.001.patch, HDFS-12823-branch-2.7.002.patch > > > Given the pretty significant performance implications of HDFS-9259 (see > discussion in HDFS-10326) when doing transfers across high latency links, it > would be helpful to have this configurability exist in the 2.7 series. > Opening a new JIRA since the original HDFS-9259 has been closed for a while > and there are conflicts due to a few classes moving. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
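For anyone consuming this backport on a 2.7 deployment, the knob is exercised purely through client-side configuration. A hedged example, assuming the key introduced by HDFS-9259 (dfs.client.socket.send.buffer.size) is carried over unchanged — verify the name and default against the patched hdfs-default.xml:
{code}
<!-- hdfs-site.xml on the DFSClient side: enlarge the send buffer used for
     writes to DataNodes over high-latency links. -->
<property>
  <name>dfs.client.socket.send.buffer.size</name>
  <value>1048576</value>
</property>
{code}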
[jira] [Commented] (HDFS-7240) Object store in HDFS
[ https://issues.apache.org/jira/browse/HDFS-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16256323#comment-16256323 ] Anu Engineer commented on HDFS-7240: bq. Thanks for organizing community meeting(s). Hope there will be a deep-dive into Ozone impl, as it may take a long time to go through the code on your own. I will be happy to do it. bq. Anything on Ozone security design? We are working on a design, we will post it soon. > Object store in HDFS > > > Key: HDFS-7240 > URL: https://issues.apache.org/jira/browse/HDFS-7240 > Project: Hadoop HDFS > Issue Type: New Feature >Reporter: Jitendra Nath Pandey >Assignee: Jitendra Nath Pandey > Attachments: HDFS Scalability and Ozone.pdf, HDFS-7240.001.patch, > HDFS-7240.002.patch, HDFS-7240.003.patch, HDFS-7240.003.patch, > HDFS-7240.004.patch, HDFS-7240.005.patch, HDFS-7240.006.patch, > MeetingMinutes.pdf, Ozone-architecture-v1.pdf, Ozonedesignupdate.pdf, > ozone_user_v0.pdf > > > This jira proposes to add object store capabilities into HDFS. > As part of the federation work (HDFS-1052) we separated block storage as a > generic storage layer. Using the Block Pool abstraction, new kinds of > namespaces can be built on top of the storage layer i.e. datanodes. > In this jira I will explore building an object store using the datanode > storage, but independent of namespace metadata. > I will soon update with a detailed design document. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12711) deadly hdfs test
[ https://issues.apache.org/jira/browse/HDFS-12711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16256317#comment-16256317 ] Sean Busbey commented on HDFS-12711: Personally, I think we can rely on committers to examine the output and disregard license violation notifications on dumpfiles. However, if we want to remove the false positive we could update [the current list of RAT plugin exclusions|https://github.com/apache/hadoop/blob/trunk/pom.xml#L377] it'd be something like:
{code}
...
<plugin>
  <groupId>org.apache.rat</groupId>
  <artifactId>apache-rat-plugin</artifactId>
  <configuration>
    <excludes>
      <exclude>.gitattributes</exclude>
      <exclude>.gitignore</exclude>
      <exclude>.git/**</exclude>
      <exclude>.idea/**</exclude>
      <exclude>**/build/**</exclude>
      <exclude>**/patchprocess/**</exclude>
      <exclude>**/*.js</exclude>
      <exclude>**/hs_err_pid*.log</exclude>
    </excludes>
  </configuration>
</plugin>
...
{code}
(as an aside excluding all javascript files seems unwisely broad, especially given the substantial size of the YARN UI module at this point.) > deadly hdfs test > > > Key: HDFS-12711 > URL: https://issues.apache.org/jira/browse/HDFS-12711 > Project: Hadoop HDFS > Issue Type: Test >Affects Versions: 2.9.0, 2.8.2 >Reporter: Allen Wittenauer >Priority: Critical > Attachments: HDFS-12711.branch-2.00.patch, fakepatch.branch-2.txt > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-7240) Object store in HDFS
[ https://issues.apache.org/jira/browse/HDFS-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16256314#comment-16256314 ] Konstantin Shvachko commented on HDFS-7240: --- Thanks for organizing community meeting(s). Hope there will be a deep-dive into Ozone impl, as it may take a long time to go through the code on your own. Would be good to give people some time to review the code before starting the vote. *Anything on Ozone security design?* > Object store in HDFS > > > Key: HDFS-7240 > URL: https://issues.apache.org/jira/browse/HDFS-7240 > Project: Hadoop HDFS > Issue Type: New Feature >Reporter: Jitendra Nath Pandey >Assignee: Jitendra Nath Pandey > Attachments: HDFS Scalability and Ozone.pdf, HDFS-7240.001.patch, > HDFS-7240.002.patch, HDFS-7240.003.patch, HDFS-7240.003.patch, > HDFS-7240.004.patch, HDFS-7240.005.patch, HDFS-7240.006.patch, > MeetingMinutes.pdf, Ozone-architecture-v1.pdf, Ozonedesignupdate.pdf, > ozone_user_v0.pdf > > > This jira proposes to add object store capabilities into HDFS. > As part of the federation work (HDFS-1052) we separated block storage as a > generic storage layer. Using the Block Pool abstraction, new kinds of > namespaces can be built on top of the storage layer i.e. datanodes. > In this jira I will explore building an object store using the datanode > storage, but independent of namespace metadata. > I will soon update with a detailed design document. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12638) NameNode exits due to ReplicationMonitor thread received Runtime exception in ReplicationWork#chooseTargets
[ https://issues.apache.org/jira/browse/HDFS-12638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Weiwei Yang updated HDFS-12638: --- Priority: Blocker (was: Critical) > NameNode exits due to ReplicationMonitor thread received Runtime exception in > ReplicationWork#chooseTargets > --- > > Key: HDFS-12638 > URL: https://issues.apache.org/jira/browse/HDFS-12638 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Affects Versions: 2.8.2 >Reporter: Jiandan Yang >Priority: Blocker > Attachments: HDFS-12638-branch-2.8.2.001.patch, HDFS-12638.002.patch, > OphanBlocksAfterTruncateDelete.jpg > > > Active NamNode exit due to NPE, I can confirm that the BlockCollection passed > in when creating ReplicationWork is null, but I do not know why > BlockCollection is null, By view history I found > [HDFS-9754|https://issues.apache.org/jira/browse/HDFS-9754] remove judging > whether BlockCollection is null. > NN logs are as following: > {code:java} > 2017-10-11 16:29:06,161 ERROR [ReplicationMonitor] > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: > ReplicationMonitor thread received Runtime exception. > java.lang.NullPointerException > at > org.apache.hadoop.hdfs.server.blockmanagement.ReplicationWork.chooseTargets(ReplicationWork.java:55) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWorkForBlocks(BlockManager.java:1532) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWork(BlockManager.java:1491) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeDatanodeWork(BlockManager.java:3792) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:3744) > at java.lang.Thread.run(Thread.java:834) > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-7240) Object store in HDFS
[ https://issues.apache.org/jira/browse/HDFS-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16256311#comment-16256311 ] Konstantin Shvachko edited comment on HDFS-7240 at 11/17/17 1:56 AM: - ??How does this align with the router-based federation HDFS-10467??? Hey [~ywskycn], router-based federation (in fact all federation approaches) are orthogonal to distributed NN. One should be able to run RBF over multiple HDFS clusters, potentially having different versions. was (Author: shv): ?? How does this align with the router-based federation HDFS-10467? ?? Hey [~ywskycn], router-based federation (in fact all federation approaches) are orthogonal to distributed NN. One should be able to run RBF over multiple HDFS clusters, potentially having different versions. > Object store in HDFS > > > Key: HDFS-7240 > URL: https://issues.apache.org/jira/browse/HDFS-7240 > Project: Hadoop HDFS > Issue Type: New Feature >Reporter: Jitendra Nath Pandey >Assignee: Jitendra Nath Pandey > Attachments: HDFS Scalability and Ozone.pdf, HDFS-7240.001.patch, > HDFS-7240.002.patch, HDFS-7240.003.patch, HDFS-7240.003.patch, > HDFS-7240.004.patch, HDFS-7240.005.patch, HDFS-7240.006.patch, > MeetingMinutes.pdf, Ozone-architecture-v1.pdf, Ozonedesignupdate.pdf, > ozone_user_v0.pdf > > > This jira proposes to add object store capabilities into HDFS. > As part of the federation work (HDFS-1052) we separated block storage as a > generic storage layer. Using the Block Pool abstraction, new kinds of > namespaces can be built on top of the storage layer i.e. datanodes. > In this jira I will explore building an object store using the datanode > storage, but independent of namespace metadata. > I will soon update with a detailed design document. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-7240) Object store in HDFS
[ https://issues.apache.org/jira/browse/HDFS-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16256311#comment-16256311 ] Konstantin Shvachko commented on HDFS-7240: --- ?? How does this align with the router-based federation HDFS-10467? ?? Hey [~ywskycn], router-based federation (in fact all federation approaches) are orthogonal to distributed NN. One should be able to run RBF over multiple HDFS clusters, potentially having different versions. > Object store in HDFS > > > Key: HDFS-7240 > URL: https://issues.apache.org/jira/browse/HDFS-7240 > Project: Hadoop HDFS > Issue Type: New Feature >Reporter: Jitendra Nath Pandey >Assignee: Jitendra Nath Pandey > Attachments: HDFS Scalability and Ozone.pdf, HDFS-7240.001.patch, > HDFS-7240.002.patch, HDFS-7240.003.patch, HDFS-7240.003.patch, > HDFS-7240.004.patch, HDFS-7240.005.patch, HDFS-7240.006.patch, > MeetingMinutes.pdf, Ozone-architecture-v1.pdf, Ozonedesignupdate.pdf, > ozone_user_v0.pdf > > > This jira proposes to add object store capabilities into HDFS. > As part of the federation work (HDFS-1052) we separated block storage as a > generic storage layer. Using the Block Pool abstraction, new kinds of > namespaces can be built on top of the storage layer i.e. datanodes. > In this jira I will explore building an object store using the datanode > storage, but independent of namespace metadata. > I will soon update with a detailed design document. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12623) Add UT for the Test Command
[ https://issues.apache.org/jira/browse/HDFS-12623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] legend updated HDFS-12623: -- Resolution: Auto Closed Status: Resolved (was: Patch Available) > Add UT for the Test Command > --- > > Key: HDFS-12623 > URL: https://issues.apache.org/jira/browse/HDFS-12623 > Project: Hadoop HDFS > Issue Type: Test > Components: test >Affects Versions: 3.1.0 >Reporter: legend > Attachments: HDFS-12623.001.patch, HDFS-12623.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12711) deadly hdfs test
[ https://issues.apache.org/jira/browse/HDFS-12711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16256295#comment-16256295 ] Erik Krogen commented on HDFS-12711: Yeah so although we obviously need to fix the unit tests, the license checker also shouldn't be picking up those temp output files in the meantime, right? > deadly hdfs test > > > Key: HDFS-12711 > URL: https://issues.apache.org/jira/browse/HDFS-12711 > Project: Hadoop HDFS > Issue Type: Test >Affects Versions: 2.9.0, 2.8.2 >Reporter: Allen Wittenauer >Priority: Critical > Attachments: HDFS-12711.branch-2.00.patch, fakepatch.branch-2.txt > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12638) NameNode exits due to ReplicationMonitor thread received Runtime exception in ReplicationWork#chooseTargets
[ https://issues.apache.org/jira/browse/HDFS-12638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16256293#comment-16256293 ] Konstantin Shvachko commented on HDFS-12638: I think it's a blocker for all branches 2.8 and up. Even just removing that line {{toDelete.delete();}} would prevent crashing NameNode. > NameNode exits due to ReplicationMonitor thread received Runtime exception in > ReplicationWork#chooseTargets > --- > > Key: HDFS-12638 > URL: https://issues.apache.org/jira/browse/HDFS-12638 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Affects Versions: 2.8.2 >Reporter: Jiandan Yang >Priority: Critical > Attachments: HDFS-12638-branch-2.8.2.001.patch, HDFS-12638.002.patch, > OphanBlocksAfterTruncateDelete.jpg > > > Active NamNode exit due to NPE, I can confirm that the BlockCollection passed > in when creating ReplicationWork is null, but I do not know why > BlockCollection is null, By view history I found > [HDFS-9754|https://issues.apache.org/jira/browse/HDFS-9754] remove judging > whether BlockCollection is null. > NN logs are as following: > {code:java} > 2017-10-11 16:29:06,161 ERROR [ReplicationMonitor] > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: > ReplicationMonitor thread received Runtime exception. > java.lang.NullPointerException > at > org.apache.hadoop.hdfs.server.blockmanagement.ReplicationWork.chooseTargets(ReplicationWork.java:55) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWorkForBlocks(BlockManager.java:1532) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWork(BlockManager.java:1491) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeDatanodeWork(BlockManager.java:3792) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:3744) > at java.lang.Thread.run(Thread.java:834) > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
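To make the suggested mitigation concrete, here is a rough sketch of the kind of defensive check that HDFS-9754 removed, placed where replication work is scheduled rather than inside chooseTargets itself; identifiers are assumptions based on the stack trace above, not the attached patches:
{code}
// Rough sketch only. Skip blocks whose owning file is already gone instead of
// letting ReplicationWork#chooseTargets dereference a null BlockCollection.
BlockCollection bc = getBlockCollection(block);
if (bc == null) {
  // Orphaned block (e.g. left behind by a truncate followed by a delete):
  // drop it from the replication queue rather than crashing ReplicationMonitor.
  neededReplications.remove(block, priority);
  continue;
}
{code}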
[jira] [Commented] (HDFS-12711) deadly hdfs test
[ https://issues.apache.org/jira/browse/HDFS-12711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16256283#comment-16256283 ] Allen Wittenauer commented on HDFS-12711: - It's probably also worth pointing out that those files also represent tests that weren't actually executed. So they aren't recorded in the fail/success output. > deadly hdfs test > > > Key: HDFS-12711 > URL: https://issues.apache.org/jira/browse/HDFS-12711 > Project: Hadoop HDFS > Issue Type: Test >Affects Versions: 2.9.0, 2.8.2 >Reporter: Allen Wittenauer >Priority: Critical > Attachments: HDFS-12711.branch-2.00.patch, fakepatch.branch-2.txt > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12711) deadly hdfs test
[ https://issues.apache.org/jira/browse/HDFS-12711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16256275#comment-16256275 ] Allen Wittenauer commented on HDFS-12711: - Those files are the stack dumps from the unit tests that ran out of resources. Fix the unit tests, those files go away. > deadly hdfs test > > > Key: HDFS-12711 > URL: https://issues.apache.org/jira/browse/HDFS-12711 > Project: Hadoop HDFS > Issue Type: Test >Affects Versions: 2.9.0, 2.8.2 >Reporter: Allen Wittenauer >Priority: Critical > Attachments: HDFS-12711.branch-2.00.patch, fakepatch.branch-2.txt > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12681) Fold HdfsLocatedFileStatus into HdfsFileStatus
[ https://issues.apache.org/jira/browse/HDFS-12681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16256271#comment-16256271 ] Hadoop QA commented on HDFS-12681: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 26s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 4 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 24s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 55s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 5s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 16m 11s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 46s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 18s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 8s{color} | {color:orange} root: The patch generated 19 new + 410 unchanged - 6 fixed = 429 total (was 416) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 1s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 3m 25s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client generated 4 new + 0 unchanged - 0 fixed = 4 total (was 0) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 25s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 14m 24s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 42s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}127m 21s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 51s{color} | {color:red} hadoop-hdfs-httpfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 43s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}270m 41s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-client | | | org.apache.hadoop.hdfs.protocol.HdfsLocatedFileStatus.getLocalNameInBytes() may expose internal representation by returning HdfsLocatedFileStatus.uPath At HdfsLocatedFileStatus.java:by returning HdfsLocatedFileStatus.u
[jira] [Comment Edited] (HDFS-12087) The error message is not friendly when set a path with the policy not enabled
[ https://issues.apache.org/jira/browse/HDFS-12087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16256263#comment-16256263 ] lufei edited comment on HDFS-12087 at 11/17/17 1:18 AM: This problem is fixed by anyone.So please close this issue,thanks. was (Author: figo): This problem is already fixed.Please close this issue. > The error message is not friendly when set a path with the policy not enabled > - > > Key: HDFS-12087 > URL: https://issues.apache.org/jira/browse/HDFS-12087 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Affects Versions: 3.0.0-beta1, 3.0.0-alpha3 >Reporter: lufei >Assignee: lufei > Labels: hdfs-ec-3.0-nice-to-have > Fix For: 3.0.0-beta1 > > Attachments: HDFS-12087.001.patch > > > First user add a policy by -addPolicies command but not enabled, then user > set a path with this policy. The error message displayed as below: > {color:#707070}RemoteException: Policy 'XOR-2-1-128k' does not match any > enabled erasure coding policies: []. The set of enabled erasure coding > policies can be configured at 'dfs.namenode.ec.policies.enabled'.{color} > The policy 'XOR-2-1-128k' is added by user but not be enabled.The error > message is not promot user to enable the policy first.I think the error > message may be better as below: > {color:#707070}RemoteException: Policy 'XOR-2-1-128k' does not match any > enabled erasure coding policies: []. The set of enabled erasure coding > policies can be configured at 'dfs.namenode.ec.policies.enabled' or enable > the policy by '-enablePolicy' EC command before.{color} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12801) RBF: Set MountTableResolver as default file resolver
[ https://issues.apache.org/jira/browse/HDFS-12801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16256269#comment-16256269 ] Hudson commented on HDFS-12801: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13251 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/13251/]) HDFS-12801. RBF: Set MountTableResolver as default file resolver. (inigoiri: rev e182e777947a85943504a207deb3cf3ffc047910) * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml > RBF: Set MountTableResolver as default file resolver > > > Key: HDFS-12801 > URL: https://issues.apache.org/jira/browse/HDFS-12801 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Minor > Labels: RBF > Fix For: 3.1.0 > > Attachments: HDFS-12801.000.patch > > > {{hdfs-default.xml}} is still using the {{MockResolver}} for the default > setup which is the one used for unit testing. This should be a real resolver > like the {{MountTableResolver}}. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
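For reference, the committed change boils down to pointing the default resolver class in hdfs-default.xml at the real implementation; a hedged sketch, with the key and class names following the usual RBF naming (verify against the committed diff):
{code}
<property>
  <name>dfs.federation.router.file.resolver.client.class</name>
  <value>org.apache.hadoop.hdfs.server.federation.resolver.MountTableResolver</value>
  <description>Class used by the Router to resolve files to subclusters.</description>
</property>
{code}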
[jira] [Updated] (HDFS-12087) The error message is not friendly when set a path with the policy not enabled
[ https://issues.apache.org/jira/browse/HDFS-12087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] lufei updated HDFS-12087: - Status: Open (was: Patch Available) > The error message is not friendly when set a path with the policy not enabled > - > > Key: HDFS-12087 > URL: https://issues.apache.org/jira/browse/HDFS-12087 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Affects Versions: 3.0.0-alpha3, 3.0.0-beta1 >Reporter: lufei >Assignee: lufei > Labels: hdfs-ec-3.0-nice-to-have > Fix For: 3.0.0-beta1 > > Attachments: HDFS-12087.001.patch > > > First user add a policy by -addPolicies command but not enabled, then user > set a path with this policy. The error message displayed as below: > {color:#707070}RemoteException: Policy 'XOR-2-1-128k' does not match any > enabled erasure coding policies: []. The set of enabled erasure coding > policies can be configured at 'dfs.namenode.ec.policies.enabled'.{color} > The policy 'XOR-2-1-128k' is added by user but not be enabled.The error > message is not promot user to enable the policy first.I think the error > message may be better as below: > {color:#707070}RemoteException: Policy 'XOR-2-1-128k' does not match any > enabled erasure coding policies: []. The set of enabled erasure coding > policies can be configured at 'dfs.namenode.ec.policies.enabled' or enable > the policy by '-enablePolicy' EC command before.{color} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12808) Add LOG.isDebugEnabled() guard for LOG.debug("...")
[ https://issues.apache.org/jira/browse/HDFS-12808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16256267#comment-16256267 ] Íñigo Goiri commented on HDFS-12808: The change LGTM. The style for the {{Logger}} is a little ugly, I'd prefer: {code} private static final Logger LOG = LoggerFactory.getLogger(TestCachingStrategy.class); {code} BTW, just add new patch files and leave the old ones. > Add LOG.isDebugEnabled() guard for LOG.debug("...") > --- > > Key: HDFS-12808 > URL: https://issues.apache.org/jira/browse/HDFS-12808 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Mehran Hassani >Assignee: Bharat Viswanadham >Priority: Minor > Attachments: HDFS-12808.00.patch > > > I am conducting research on log related bugs. I tried to make a tool to fix > repetitive yet simple patterns of bugs that are related to logs. In this > file, there is a debug level logging statement containing multiple string > concatenation without the if statement before them: > hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestCachingStrategy.java, > LOG.debug("got fadvise(offset=" + offset + ", len=" + len +",flags=" + flags > + ")");, 82 > Would you be interested in adding the if, to the logging statement? -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
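As a side note, once the class is on the SLF4J {{Logger}} API as suggested above, the parameterized form sidesteps the concatenation cost without an explicit guard; a small self-contained sketch (the traceFadvise method is purely illustrative):
{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class TestCachingStrategy {
  private static final Logger LOG =
      LoggerFactory.getLogger(TestCachingStrategy.class);

  void traceFadvise(long offset, long len, int flags) {
    // The {} placeholders are only rendered when debug logging is enabled,
    // so no isDebugEnabled() check is needed for simple arguments.
    LOG.debug("got fadvise(offset={}, len={}, flags={})", offset, len, flags);
  }
}
{code}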
[jira] [Commented] (HDFS-12827) Need Clarity on Replica Placement: The First Baby Steps in HDFS Architecture documentation
[ https://issues.apache.org/jira/browse/HDFS-12827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16256265#comment-16256265 ] Suri babu Nuthalapati commented on HDFS-12827: -- Thank you, I will mark it as resolved. Suri > Need Clarity on Replica Placement: The First Baby Steps in HDFS Architecture > documentation > -- > > Key: HDFS-12827 > URL: https://issues.apache.org/jira/browse/HDFS-12827 > Project: Hadoop HDFS > Issue Type: Improvement > Components: documentation >Reporter: Suri babu Nuthalapati >Priority: Minor > > The placement should be this: > https://hadoop.apache.org/docs/r1.2.1/hdfs_design.html > HDFS’s placement policy is to put one replica on one node in the local rack, > another on a node in a different (remote) rack, and the last on a different > node in the same remote rack. > Hadoop Definitive guide says the same and I have tested and saw the same > behavior as above. > > But the documentation in the versions after r2.5.2 it was mentioned as: > http://hadoop.apache.org/docs/r2.5.2/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html > HDFS’s placement policy is to put one replica on one node in the local rack, > another on a different node in the local rack, and the last on a different > node in a different rack. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-12827) Need Clarity on Replica Placement: The First Baby Steps in HDFS Architecture documentation
[ https://issues.apache.org/jira/browse/HDFS-12827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16256265#comment-16256265 ] Suri babu Nuthalapati edited comment on HDFS-12827 at 11/17/17 1:17 AM: Thank you, you can mark it as resolved. Suri was (Author: surinuthalap...@live.com): Thank you, I will mark it as resolved. Suri > Need Clarity on Replica Placement: The First Baby Steps in HDFS Architecture > documentation > -- > > Key: HDFS-12827 > URL: https://issues.apache.org/jira/browse/HDFS-12827 > Project: Hadoop HDFS > Issue Type: Improvement > Components: documentation >Reporter: Suri babu Nuthalapati >Priority: Minor > > The placement should be this: > https://hadoop.apache.org/docs/r1.2.1/hdfs_design.html > HDFS’s placement policy is to put one replica on one node in the local rack, > another on a node in a different (remote) rack, and the last on a different > node in the same remote rack. > Hadoop Definitive guide says the same and I have tested and saw the same > behavior as above. > > But the documentation in the versions after r2.5.2 it was mentioned as: > http://hadoop.apache.org/docs/r2.5.2/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html > HDFS’s placement policy is to put one replica on one node in the local rack, > another on a different node in the local rack, and the last on a different > node in a different rack. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-12087) The error message is not friendly when set a path with the policy not enabled
[ https://issues.apache.org/jira/browse/HDFS-12087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16256263#comment-16256263 ] lufei edited comment on HDFS-12087 at 11/17/17 1:16 AM: This problem is already fixed.Please close this issue. was (Author: figo): This problem is already fixed. > The error message is not friendly when set a path with the policy not enabled > - > > Key: HDFS-12087 > URL: https://issues.apache.org/jira/browse/HDFS-12087 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Affects Versions: 3.0.0-beta1, 3.0.0-alpha3 >Reporter: lufei >Assignee: lufei > Labels: hdfs-ec-3.0-nice-to-have > Fix For: 3.0.0-beta1 > > Attachments: HDFS-12087.001.patch > > > First user add a policy by -addPolicies command but not enabled, then user > set a path with this policy. The error message displayed as below: > {color:#707070}RemoteException: Policy 'XOR-2-1-128k' does not match any > enabled erasure coding policies: []. The set of enabled erasure coding > policies can be configured at 'dfs.namenode.ec.policies.enabled'.{color} > The policy 'XOR-2-1-128k' is added by user but not be enabled.The error > message is not promot user to enable the policy first.I think the error > message may be better as below: > {color:#707070}RemoteException: Policy 'XOR-2-1-128k' does not match any > enabled erasure coding policies: []. The set of enabled erasure coding > policies can be configured at 'dfs.namenode.ec.policies.enabled' or enable > the policy by '-enablePolicy' EC command before.{color} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12087) The error message is not friendly when set a path with the policy not enabled
[ https://issues.apache.org/jira/browse/HDFS-12087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] lufei updated HDFS-12087: - Fix Version/s: 3.0.0-beta1 This problem is already fixed. > The error message is not friendly when set a path with the policy not enabled > - > > Key: HDFS-12087 > URL: https://issues.apache.org/jira/browse/HDFS-12087 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Affects Versions: 3.0.0-beta1, 3.0.0-alpha3 >Reporter: lufei >Assignee: lufei > Labels: hdfs-ec-3.0-nice-to-have > Fix For: 3.0.0-beta1 > > Attachments: HDFS-12087.001.patch > > > First user add a policy by -addPolicies command but not enabled, then user > set a path with this policy. The error message displayed as below: > {color:#707070}RemoteException: Policy 'XOR-2-1-128k' does not match any > enabled erasure coding policies: []. The set of enabled erasure coding > policies can be configured at 'dfs.namenode.ec.policies.enabled'.{color} > The policy 'XOR-2-1-128k' is added by user but not be enabled.The error > message is not promot user to enable the policy first.I think the error > message may be better as below: > {color:#707070}RemoteException: Policy 'XOR-2-1-128k' does not match any > enabled erasure coding policies: []. The set of enabled erasure coding > policies can be configured at 'dfs.namenode.ec.policies.enabled' or enable > the policy by '-enablePolicy' EC command before.{color} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12087) The error message is not friendly when set a path with the policy not enabled
[ https://issues.apache.org/jira/browse/HDFS-12087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] lufei updated HDFS-12087: - Affects Version/s: 3.0.0-beta1 > The error message is not friendly when set a path with the policy not enabled > - > > Key: HDFS-12087 > URL: https://issues.apache.org/jira/browse/HDFS-12087 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Affects Versions: 3.0.0-beta1, 3.0.0-alpha3 >Reporter: lufei >Assignee: lufei > Labels: hdfs-ec-3.0-nice-to-have > Attachments: HDFS-12087.001.patch > > > First user add a policy by -addPolicies command but not enabled, then user > set a path with this policy. The error message displayed as below: > {color:#707070}RemoteException: Policy 'XOR-2-1-128k' does not match any > enabled erasure coding policies: []. The set of enabled erasure coding > policies can be configured at 'dfs.namenode.ec.policies.enabled'.{color} > The policy 'XOR-2-1-128k' is added by user but not be enabled.The error > message is not promot user to enable the policy first.I think the error > message may be better as below: > {color:#707070}RemoteException: Policy 'XOR-2-1-128k' does not match any > enabled erasure coding policies: []. The set of enabled erasure coding > policies can be configured at 'dfs.namenode.ec.policies.enabled' or enable > the policy by '-enablePolicy' EC command before.{color} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12801) RBF: Set MountTableResolver as default file resolver
[ https://issues.apache.org/jira/browse/HDFS-12801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-12801: --- Labels: RBF (was: ) > RBF: Set MountTableResolver as default file resolver > > > Key: HDFS-12801 > URL: https://issues.apache.org/jira/browse/HDFS-12801 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Minor > Labels: RBF > Fix For: 3.1.0 > > Attachments: HDFS-12801.000.patch > > > {{hdfs-default.xml}} is still using the {{MockResolver}} for the default > setup which is the one used for unit testing. This should be a real resolver > like the {{MountTableResolver}}. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12801) RBF: Set MountTableResolver as default file resolver
[ https://issues.apache.org/jira/browse/HDFS-12801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-12801: --- Fix Version/s: 3.1.0 > RBF: Set MountTableResolver as default file resolver > > > Key: HDFS-12801 > URL: https://issues.apache.org/jira/browse/HDFS-12801 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Minor > Labels: RBF > Fix For: 3.1.0 > > Attachments: HDFS-12801.000.patch > > > {{hdfs-default.xml}} is still using the {{MockResolver}} for the default > setup which is the one used for unit testing. This should be a real resolver > like the {{MountTableResolver}}. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12801) RBF: Set MountTableResolver as default file resolver
[ https://issues.apache.org/jira/browse/HDFS-12801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-12801: --- Target Version/s: 2.9.0, 3.0.0 (was: 3.1.0) > RBF: Set MountTableResolver as default file resolver > > > Key: HDFS-12801 > URL: https://issues.apache.org/jira/browse/HDFS-12801 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Minor > Labels: RBF > Fix For: 3.1.0 > > Attachments: HDFS-12801.000.patch > > > {{hdfs-default.xml}} is still using the {{MockResolver}} for the default > setup which is the one used for unit testing. This should be a real resolver > like the {{MountTableResolver}}. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12801) RBF: Set MountTableResolver as default file resolver
[ https://issues.apache.org/jira/browse/HDFS-12801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-12801: --- Resolution: Fixed Hadoop Flags: Reviewed Target Version/s: 3.1.0 Status: Resolved (was: Patch Available) > RBF: Set MountTableResolver as default file resolver > > > Key: HDFS-12801 > URL: https://issues.apache.org/jira/browse/HDFS-12801 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Minor > Labels: RBF > Attachments: HDFS-12801.000.patch > > > {{hdfs-default.xml}} is still using the {{MockResolver}} for the default > setup which is the one used for unit testing. This should be a real resolver > like the {{MountTableResolver}}. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12801) RBF: Set MountTableResolver as default file resolver
[ https://issues.apache.org/jira/browse/HDFS-12801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16256239#comment-16256239 ] Íñigo Goiri commented on HDFS-12801: Thanks for the feedback [~chris.douglas] and [~ywskycn]. I don't expect any of the new work on the current feature to break any functionality. I'll commit this one to trunk and target 3.1. I could backport to branch-3 (or even branch-2) if there is interest. Thanks for the review [~hanishakoneru], [~ywskycn] and [~chris.douglas]. > RBF: Set MountTableResolver as default file resolver > > > Key: HDFS-12801 > URL: https://issues.apache.org/jira/browse/HDFS-12801 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Minor > Attachments: HDFS-12801.000.patch > > > {{hdfs-default.xml}} is still using the {{MockResolver}} for the default > setup which is the one used for unit testing. This should be a real resolver > like the {{MountTableResolver}}. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12823) Backport HDFS-9259 "Make SO_SNDBUF size configurable at DFSClient" to branch-2.7
[ https://issues.apache.org/jira/browse/HDFS-12823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erik Krogen updated HDFS-12823: --- Attachment: HDFS-12823-branch-2.7.002.patch > Backport HDFS-9259 "Make SO_SNDBUF size configurable at DFSClient" to > branch-2.7 > > > Key: HDFS-12823 > URL: https://issues.apache.org/jira/browse/HDFS-12823 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs, hdfs-client >Reporter: Erik Krogen >Assignee: Erik Krogen > Attachments: HDFS-12823-branch-2.7.000.patch, > HDFS-12823-branch-2.7.001.patch, HDFS-12823-branch-2.7.002.patch > > > Given the pretty significant performance implications of HDFS-9259 (see > discussion in HDFS-10326) when doing transfers across high latency links, it > would be helpful to have this configurability exist in the 2.7 series. > Opening a new JIRA since the original HDFS-9259 has been closed for a while > and there are conflicts due to a few classes moving. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12823) Backport HDFS-9259 "Make SO_SNDBUF size configurable at DFSClient" to branch-2.7
[ https://issues.apache.org/jira/browse/HDFS-12823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erik Krogen updated HDFS-12823: --- Attachment: (was: HDFS-12823-branch-2.7.002.patch) > Backport HDFS-9259 "Make SO_SNDBUF size configurable at DFSClient" to > branch-2.7 > > > Key: HDFS-12823 > URL: https://issues.apache.org/jira/browse/HDFS-12823 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs, hdfs-client >Reporter: Erik Krogen >Assignee: Erik Krogen > Attachments: HDFS-12823-branch-2.7.000.patch, > HDFS-12823-branch-2.7.001.patch > > > Given the pretty significant performance implications of HDFS-9259 (see > discussion in HDFS-10326) when doing transfers across high latency links, it > would be helpful to have this configurability exist in the 2.7 series. > Opening a new JIRA since the original HDFS-9259 has been closed for a while > and there are conflicts due to a few classes moving. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12823) Backport HDFS-9259 "Make SO_SNDBUF size configurable at DFSClient" to branch-2.7
[ https://issues.apache.org/jira/browse/HDFS-12823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erik Krogen updated HDFS-12823: --- Attachment: HDFS-12823-branch-2.7.002.patch > Backport HDFS-9259 "Make SO_SNDBUF size configurable at DFSClient" to > branch-2.7 > > > Key: HDFS-12823 > URL: https://issues.apache.org/jira/browse/HDFS-12823 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs, hdfs-client >Reporter: Erik Krogen >Assignee: Erik Krogen > Attachments: HDFS-12823-branch-2.7.000.patch, > HDFS-12823-branch-2.7.001.patch, HDFS-12823-branch-2.7.002.patch > > > Given the pretty significant performance implications of HDFS-9259 (see > discussion in HDFS-10326) when doing transfers across high latency links, it > would be helpful to have this configurability exist in the 2.7 series. > Opening a new JIRA since the original HDFS-9259 has been closed for a while > and there are conflicts due to a few classes moving. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12823) Backport HDFS-9259 "Make SO_SNDBUF size configurable at DFSClient" to branch-2.7
[ https://issues.apache.org/jira/browse/HDFS-12823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16256217#comment-16256217 ] Erik Krogen commented on HDFS-12823: - The license issues are false and I believe caused by HDFS-12711; I left a [comment there|https://issues.apache.org/jira/browse/HDFS-12711?focusedCommentId=16256166&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16256166] - Two checkstyle issues are caused by long static import lines; nothing I can do about those - Fixed the other three checkstyle issues; these came from matching my code to existing nearby code, but in the same spirit as the v000 to v001 patch change, I think it's better to just follow proper conventions - Most of the whitespace warnings are invalid; they call out lines in hdfs-default that I did not modify... One line was my fault - The tests are passing fine locally; I think the numerous failures and timeouts are just due to the general problems the HDFS unit tests are currently having Attaching v002 patch with the modifications described above > Backport HDFS-9259 "Make SO_SNDBUF size configurable at DFSClient" to > branch-2.7 > > > Key: HDFS-12823 > URL: https://issues.apache.org/jira/browse/HDFS-12823 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs, hdfs-client >Reporter: Erik Krogen >Assignee: Erik Krogen > Attachments: HDFS-12823-branch-2.7.000.patch, > HDFS-12823-branch-2.7.001.patch > > > Given the pretty significant performance implications of HDFS-9259 (see > discussion in HDFS-10326) when doing transfers across high latency links, it > would be helpful to have this configurability exist in the 2.7 series. > Opening a new JIRA since the original HDFS-9259 has been closed for a while > and there are conflicts due to a few classes moving. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
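For anyone picking up the backport, here is a minimal sketch of how a client would use the new knob once HDFS-9259 is in place. The key name comes from the original change; the 1 MiB value is only an illustration, and the note about non-positive values should be verified against the branch in question.
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.HdfsConfiguration;

public class LargeSendBufferClient {
  public static void main(String[] args) throws Exception {
    Configuration conf = new HdfsConfiguration();
    // Request a 1 MiB SO_SNDBUF on DataNode connections (illustrative value);
    // a non-positive value is typically treated as "leave the OS default".
    conf.setInt("dfs.client.socket.send.buffer.size", 1024 * 1024);

    // Writes issued through this client can benefit on high-latency links.
    try (FileSystem fs = FileSystem.get(conf)) {
      System.out.println("Default FS: " + fs.getUri());
    }
  }
}
{code}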
[jira] [Updated] (HDFS-12778) [READ] Report multiple locations for PROVIDED blocks
[ https://issues.apache.org/jira/browse/HDFS-12778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Virajith Jalaparti updated HDFS-12778: -- Status: Patch Available (was: Open) > [READ] Report multiple locations for PROVIDED blocks > > > Key: HDFS-12778 > URL: https://issues.apache.org/jira/browse/HDFS-12778 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Virajith Jalaparti >Assignee: Virajith Jalaparti > Attachments: HDFS-12778-HDFS-9806.001.patch, > HDFS-12778-HDFS-9806.002.patch, HDFS-12778-HDFS-9806.003.patch > > > On {{getBlockLocations}}, only one Datanode is returned as the location for > all PROVIDED blocks. This can hurt the performance of applications which > typically expect 3 locations per block. We need to return multiple Datanodes for > each PROVIDED block for better application performance/resilience. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12808) Add LOG.isDebugEnabled() guard for LOG.debug("...")
[ https://issues.apache.org/jira/browse/HDFS-12808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16256198#comment-16256198 ] Bharat Viswanadham commented on HDFS-12808: --- [~busbey] [~goiri] Updated to use slf4j. Created a task HDFS-12829 to update in other modules in hdfs > Add LOG.isDebugEnabled() guard for LOG.debug("...") > --- > > Key: HDFS-12808 > URL: https://issues.apache.org/jira/browse/HDFS-12808 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Mehran Hassani >Assignee: Bharat Viswanadham >Priority: Minor > Attachments: HDFS-12808.00.patch > > > I am conducting research on log related bugs. I tried to make a tool to fix > repetitive yet simple patterns of bugs that are related to logs. In this > file, there is a debug level logging statement containing multiple string > concatenation without the if statement before them: > hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestCachingStrategy.java, > LOG.debug("got fadvise(offset=" + offset + ", len=" + len +",flags=" + flags > + ")");, 82 > Would you be interested in adding the if, to the logging statement? -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
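As a side note on the slf4j switch discussed above: with parameterized logging the explicit isDebugEnabled() guard becomes unnecessary for simple messages, because the message is only formatted when DEBUG is actually enabled. A minimal sketch, modeled on the fadvise statement quoted in the description (class and method names here are made up for illustration):
{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class FadviseLogging {
  private static final Logger LOG = LoggerFactory.getLogger(FadviseLogging.class);

  static void logFadvise(long offset, long len, int flags) {
    // Old commons-logging style: the strings are concatenated even when DEBUG
    // is off, hence the need for an isDebugEnabled() guard:
    //   if (LOG.isDebugEnabled()) {
    //     LOG.debug("got fadvise(offset=" + offset + ", len=" + len + ",flags=" + flags + ")");
    //   }

    // slf4j style: placeholders are only substituted when DEBUG is enabled,
    // so no guard is needed for cheap arguments.
    LOG.debug("got fadvise(offset={}, len={}, flags={})", offset, len, flags);
  }
}
{code}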
[jira] [Updated] (HDFS-12808) Add LOG.isDebugEnabled() guard for LOG.debug("...")
[ https://issues.apache.org/jira/browse/HDFS-12808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDFS-12808: -- Status: Patch Available (was: Open) > Add LOG.isDebugEnabled() guard for LOG.debug("...") > --- > > Key: HDFS-12808 > URL: https://issues.apache.org/jira/browse/HDFS-12808 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Mehran Hassani >Assignee: Bharat Viswanadham >Priority: Minor > Attachments: HDFS-12808.00.patch > > > I am conducting research on log related bugs. I tried to make a tool to fix > repetitive yet simple patterns of bugs that are related to logs. In this > file, there is a debug level logging statement containing multiple string > concatenation without the if statement before them: > hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestCachingStrategy.java, > LOG.debug("got fadvise(offset=" + offset + ", len=" + len +",flags=" + flags > + ")");, 82 > Would you be interested in adding the if, to the logging statement? -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12823) Backport HDFS-9259 "Make SO_SNDBUF size configurable at DFSClient" to branch-2.7
[ https://issues.apache.org/jira/browse/HDFS-12823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16256197#comment-16256197 ] Hadoop QA commented on HDFS-12823: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 15m 46s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} branch-2.7 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 59s{color} | {color:green} branch-2.7 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 6s{color} | {color:green} branch-2.7 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 36s{color} | {color:green} branch-2.7 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s{color} | {color:green} branch-2.7 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 41s{color} | {color:green} branch-2.7 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 48s{color} | {color:green} branch-2.7 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 30s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 5 new + 822 unchanged - 0 fixed = 827 total (was 822) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 61 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 0s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 48s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 79m 25s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 1m 12s{color} | {color:red} The patch generated 331 ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}121m 21s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Unreaped Processes | hadoop-hdfs:18 | | Failed junit tests | hadoop.hdfs.TestClientReportBadBlock | | | hadoop.hdfs.web.TestWebHdfsTimeouts | | Timed out junit tests | org.apache.hadoop.hdfs.TestSetrepDecreasing | | | org.apache.hadoop.hdfs.TestFileAppend4 | | | org.apache.hadoop.hdfs.TestRollingUpgradeDowngrade | | | org.apache.hadoop.hdfs.TestLease | | | org.apache.hadoop.hdfs.TestHDFSServerPorts | | | org.apache.hadoop.hdfs.TestDFSUpgrade | | | org.apache.hadoop.hdfs.web.TestWebHDFS | | | org.apache.hadoop.hdfs.TestAppendSnapshotTruncate | | | org.apache.hadoop.hdfs.TestRenameWhileOpen | | | org.apache.hadoop.hdfs.TestMiniDFSCluster | | | org.apache.hadoop.hdfs.TestBlockReaderFactory | | | org.apache.hadoop.hdfs.TestHFlush | | | org.apache.hadoop.hdfs.TestEncryptedTransfer | | | org.apache.hadoop.hdfs.TestDFSShell | | | org.apache.hadoop.hdfs.TestDataTransferProtocol | | | org.apache.hadoop.hdfs.TestDFSRename | | | org.apache.hadoop.hdfs.TestHDFSTrash | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:67e87c9 | | JIRA Issue | HDFS-12823 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12898064/HDFS-12823-branch-2.7.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsi
[jira] [Created] (HDFS-12829) Moving logging APIs over to slf4j in hdfs
Bharat Viswanadham created HDFS-12829: - Summary: Moving logging APIs over to slf4j in hdfs Key: HDFS-12829 URL: https://issues.apache.org/jira/browse/HDFS-12829 Project: Hadoop HDFS Issue Type: Bug Reporter: Bharat Viswanadham Assignee: Bharat Viswanadham -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12808) Add LOG.isDebugEnabled() guard for LOG.debug("...")
[ https://issues.apache.org/jira/browse/HDFS-12808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDFS-12808: -- Attachment: HDFS-12808.00.patch > Add LOG.isDebugEnabled() guard for LOG.debug("...") > --- > > Key: HDFS-12808 > URL: https://issues.apache.org/jira/browse/HDFS-12808 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Mehran Hassani >Assignee: Bharat Viswanadham >Priority: Minor > Attachments: HDFS-12808.00.patch > > > I am conducting research on log related bugs. I tried to make a tool to fix > repetitive yet simple patterns of bugs that are related to logs. In this > file, there is a debug level logging statement containing multiple string > concatenation without the if statement before them: > hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestCachingStrategy.java, > LOG.debug("got fadvise(offset=" + offset + ", len=" + len +",flags=" + flags > + ")");, 82 > Would you be interested in adding the if, to the logging statement? -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12778) [READ] Report multiple locations for PROVIDED blocks
[ https://issues.apache.org/jira/browse/HDFS-12778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Virajith Jalaparti updated HDFS-12778: -- Status: Open (was: Patch Available) > [READ] Report multiple locations for PROVIDED blocks > > > Key: HDFS-12778 > URL: https://issues.apache.org/jira/browse/HDFS-12778 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Virajith Jalaparti >Assignee: Virajith Jalaparti > Attachments: HDFS-12778-HDFS-9806.001.patch, > HDFS-12778-HDFS-9806.002.patch, HDFS-12778-HDFS-9806.003.patch > > > On {{getBlockLocations}}, only one Datanode is returned as the location for > all PROVIDED blocks. This can hurt the performance of applications which > typically expect 3 locations per block. We need to return multiple Datanodes for > each PROVIDED block for better application performance/resilience. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12778) [READ] Report multiple locations for PROVIDED blocks
[ https://issues.apache.org/jira/browse/HDFS-12778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Virajith Jalaparti updated HDFS-12778: -- Attachment: (was: HDFS-12778-HDFS-9806.003.patch) > [READ] Report multiple locations for PROVIDED blocks > > > Key: HDFS-12778 > URL: https://issues.apache.org/jira/browse/HDFS-12778 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Virajith Jalaparti >Assignee: Virajith Jalaparti > Attachments: HDFS-12778-HDFS-9806.001.patch, > HDFS-12778-HDFS-9806.002.patch, HDFS-12778-HDFS-9806.003.patch > > > On {{getBlockLocations}}, only one Datanode is returned as the location for > all PROVIDED blocks. This can hurt the performance of applications which > typically expect 3 locations per block. We need to return multiple Datanodes for > each PROVIDED block for better application performance/resilience. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12778) [READ] Report multiple locations for PROVIDED blocks
[ https://issues.apache.org/jira/browse/HDFS-12778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Virajith Jalaparti updated HDFS-12778: -- Attachment: HDFS-12778-HDFS-9806.003.patch > [READ] Report multiple locations for PROVIDED blocks > > > Key: HDFS-12778 > URL: https://issues.apache.org/jira/browse/HDFS-12778 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Virajith Jalaparti >Assignee: Virajith Jalaparti > Attachments: HDFS-12778-HDFS-9806.001.patch, > HDFS-12778-HDFS-9806.002.patch, HDFS-12778-HDFS-9806.003.patch > > > On {{getBlockLocations}}, only one Datanode is returned as the location for > all PROVIDED blocks. This can hurt the performance of applications which > typically expect 3 locations per block. We need to return multiple Datanodes for > each PROVIDED block for better application performance/resilience. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12778) [READ] Report multiple locations for PROVIDED blocks
[ https://issues.apache.org/jira/browse/HDFS-12778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16256186#comment-16256186 ] Hadoop QA commented on HDFS-12778: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} HDFS-9806 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 29s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 40s{color} | {color:green} HDFS-9806 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 19s{color} | {color:green} HDFS-9806 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 57s{color} | {color:green} HDFS-9806 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 30s{color} | {color:green} HDFS-9806 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 57s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 29s{color} | {color:red} hadoop-tools/hadoop-fs2img in HDFS-9806 has 1 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 13s{color} | {color:green} HDFS-9806 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 16s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 30s{color} | {color:red} hadoop-fs2img in the patch failed. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 22s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 13s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 99m 12s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 57s{color} | {color:green} hadoop-fs2img in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 32s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}183m 23s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestReadStripedFileWithMissingBlocks | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics | | | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | HDFS-12778 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12898055/HDFS-12778-HDFS-9806.003.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 2502f1a6dbec 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Bui
[jira] [Commented] (HDFS-12711) deadly hdfs test
[ https://issues.apache.org/jira/browse/HDFS-12711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16256166#comment-16256166 ] Erik Krogen commented on HDFS-12711: Hey [~aw], in addition to the wild fluctuations in success of HDFS unit tests (not your fault, but unfortunate) I'm seeing lots of false license violations caused by these changes, e.g.: https://builds.apache.org/job/PreCommit-HDFS-Build/22122/artifact/out/patch-asflicense-problems.txt Can we do something to solve that? > deadly hdfs test > > > Key: HDFS-12711 > URL: https://issues.apache.org/jira/browse/HDFS-12711 > Project: Hadoop HDFS > Issue Type: Test >Affects Versions: 2.9.0, 2.8.2 >Reporter: Allen Wittenauer >Priority: Critical > Attachments: HDFS-12711.branch-2.00.patch, fakepatch.branch-2.txt > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12801) RBF: Set MountTableResolver as default file resolver
[ https://issues.apache.org/jira/browse/HDFS-12801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16256153#comment-16256153 ] Wei Yan commented on HDFS-12801: Agree with [~chris.douglas], a separate dev branch may not bring much help here, given ongoing tasks will not break what RBF has in trunk currently. > RBF: Set MountTableResolver as default file resolver > > > Key: HDFS-12801 > URL: https://issues.apache.org/jira/browse/HDFS-12801 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Minor > Attachments: HDFS-12801.000.patch > > > {{hdfs-default.xml}} is still using the {{MockResolver}} for the default > setup which is the one used for unit testing. This should be a real resolver > like the {{MountTableResolver}}. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12823) Backport HDFS-9259 "Make SO_SNDBUF size configurable at DFSClient" to branch-2.7
[ https://issues.apache.org/jira/browse/HDFS-12823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16256146#comment-16256146 ] Hadoop QA commented on HDFS-12823: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 18m 28s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} branch-2.7 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 58s{color} | {color:green} branch-2.7 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 5s{color} | {color:green} branch-2.7 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 36s{color} | {color:green} branch-2.7 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 8s{color} | {color:green} branch-2.7 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 7s{color} | {color:green} branch-2.7 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 49s{color} | {color:green} branch-2.7 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 5s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 30s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 5 new + 822 unchanged - 0 fixed = 827 total (was 822) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 4s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 61 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 0s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 51s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 80m 20s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 1m 8s{color} | {color:red} The patch generated 184 ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}126m 13s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Unreaped Processes | hadoop-hdfs:20 | | Failed junit tests | hadoop.hdfs.TestListPathServlet | | | hadoop.hdfs.TestDataTransferProtocol | | | hadoop.hdfs.server.datanode.TestDataNodeMXBean | | Timed out junit tests | org.apache.hadoop.hdfs.TestLeaseRecovery2 | | | org.apache.hadoop.hdfs.TestDatanodeRegistration | | | org.apache.hadoop.hdfs.TestDFSClientFailover | | | org.apache.hadoop.hdfs.TestDFSClientRetries | | | org.apache.hadoop.hdfs.web.TestWebHdfsTokens | | | org.apache.hadoop.hdfs.TestDFSInotifyEventInputStream | | | org.apache.hadoop.hdfs.TestFileAppendRestart | | | org.apache.hadoop.hdfs.TestSeekBug | | | org.apache.hadoop.hdfs.TestDFSMkdirs | | | org.apache.hadoop.hdfs.TestDatanodeReport | | | org.apache.hadoop.hdfs.web.TestWebHDFS | | | org.apache.hadoop.hdfs.web.TestWebHDFSXAttr | | | org.apache.hadoop.hdfs.web.TestWebHdfsWithMultipleNameNodes | | | org.apache.hadoop.hdfs.TestMiniDFSCluster | | | org.apache.hadoop.hdfs.TestDistributedFileSystem | | | org.apache.hadoop.hdfs.web.TestWebHDFSForHA | | | org.apache.hadoop.hdfs.TestBalancerBandwidth | | | org.apache.hadoop.hdfs.TestSetTimes | | | org.apache.hadoop.hdfs.TestDFSShell | | | org.apache.hadoop.hdfs.web.TestWebHDFSAcl | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce
[jira] [Commented] (HDFS-12801) RBF: Set MountTableResolver as default file resolver
[ https://issues.apache.org/jira/browse/HDFS-12801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16256140#comment-16256140 ] Chris Douglas commented on HDFS-12801: -- bq. Should I merge this thing into trunk targeting 3.1? Better to have a branch for the full HDFS-12615? Depends on the cadence you want to keep, really. If HDFS-12615 will make monotonic progress (i.e., trunk remains in a releasable state) and each change is self-contained, then a branch doesn't add much value. If RBF will be temporarily broken during HDFS-12615, then best to keep that on a branch. Similar argument if patches would be too large/difficult to review, and HDFS-12615 is best understood when it's ready to merge. The current list of subtasks look fine to commit directly to trunk, IMO. > RBF: Set MountTableResolver as default file resolver > > > Key: HDFS-12801 > URL: https://issues.apache.org/jira/browse/HDFS-12801 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Minor > Attachments: HDFS-12801.000.patch > > > {{hdfs-default.xml}} is still using the {{MockResolver}} for the default > setup which is the one used for unit testing. This should be a real resolver > like the {{MountTableResolver}}. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12801) RBF: Set MountTableResolver as default file resolver
[ https://issues.apache.org/jira/browse/HDFS-12801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16256143#comment-16256143 ] Chris Douglas commented on HDFS-12801: -- Also +1 on the patch. > RBF: Set MountTableResolver as default file resolver > > > Key: HDFS-12801 > URL: https://issues.apache.org/jira/browse/HDFS-12801 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Minor > Attachments: HDFS-12801.000.patch > > > {{hdfs-default.xml}} is still using the {{MockResolver}} for the default > setup which is the one used for unit testing. This should be a real resolver > like the {{MountTableResolver}}. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12823) Backport HDFS-9259 "Make SO_SNDBUF size configurable at DFSClient" to branch-2.7
[ https://issues.apache.org/jira/browse/HDFS-12823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16256137#comment-16256137 ] Manoj Govindassamy commented on HDFS-12823: --- Thanks for the extra efforts [~xkrogen]. Much appreciated. +1, pending Jenkins. > Backport HDFS-9259 "Make SO_SNDBUF size configurable at DFSClient" to > branch-2.7 > > > Key: HDFS-12823 > URL: https://issues.apache.org/jira/browse/HDFS-12823 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs, hdfs-client >Reporter: Erik Krogen >Assignee: Erik Krogen > Attachments: HDFS-12823-branch-2.7.000.patch, > HDFS-12823-branch-2.7.001.patch > > > Given the pretty significant performance implications of HDFS-9259 (see > discussion in HDFS-10326) when doing transfers across high latency links, it > would be helpful to have this configurability exist in the 2.7 series. > Opening a new JIRA since the original HDFS-9259 has been closed for a while > and there are conflicts due to a few classes moving. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-12827) Need Clarity on Replica Placement: The First Baby Steps in HDFS Architecture documentation
[ https://issues.apache.org/jira/browse/HDFS-12827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16256118#comment-16256118 ] Bharat Viswanadham edited comment on HDFS-12827 at 11/16/17 11:09 PM: -- [~surinuthalap...@live.com] This is just a documentation issue. The behavior is same across all releases. This has been fixed by HDFS-11833 As 2.5.2 is a released version, I think documentation cannot be updated for already released version. For newer versions, this has been fixed. was (Author: bharatviswa): [~surinuthalap...@live.com] This is just a documentation issue. This has been fixed by HDFS-11833 As 2.5.2 is a released version, I think documentation cannot be updated for already released version. For newer versions, this has been fixed. > Need Clarity on Replica Placement: The First Baby Steps in HDFS Architecture > documentation > -- > > Key: HDFS-12827 > URL: https://issues.apache.org/jira/browse/HDFS-12827 > Project: Hadoop HDFS > Issue Type: Improvement > Components: documentation >Reporter: Suri babu Nuthalapati >Priority: Minor > > The placement should be this: > https://hadoop.apache.org/docs/r1.2.1/hdfs_design.html > HDFS’s placement policy is to put one replica on one node in the local rack, > another on a node in a different (remote) rack, and the last on a different > node in the same remote rack. > Hadoop Definitive guide says the same and I have tested and saw the same > behavior as above. > > But the documentation in the versions after r2.5.2 it was mentioned as: > http://hadoop.apache.org/docs/r2.5.2/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html > HDFS’s placement policy is to put one replica on one node in the local rack, > another on a different node in the local rack, and the last on a different > node in a different rack. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12827) Need Clarity on Replica Placement: The First Baby Steps in HDFS Architecture documentation
[ https://issues.apache.org/jira/browse/HDFS-12827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16256118#comment-16256118 ] Bharat Viswanadham commented on HDFS-12827: --- [~surinuthalap...@live.com] This is just a documentation issue. This has been fixed by HDFS-11833 As 2.5.2 is a released version, I think documentation cannot be updated for already released version. For newer versions, this has been fixed. > Need Clarity on Replica Placement: The First Baby Steps in HDFS Architecture > documentation > -- > > Key: HDFS-12827 > URL: https://issues.apache.org/jira/browse/HDFS-12827 > Project: Hadoop HDFS > Issue Type: Improvement > Components: documentation >Reporter: Suri babu Nuthalapati >Priority: Minor > > The placement should be this: > https://hadoop.apache.org/docs/r1.2.1/hdfs_design.html > HDFS’s placement policy is to put one replica on one node in the local rack, > another on a node in a different (remote) rack, and the last on a different > node in the same remote rack. > Hadoop Definitive guide says the same and I have tested and saw the same > behavior as above. > > But the documentation in the versions after r2.5.2 it was mentioned as: > http://hadoop.apache.org/docs/r2.5.2/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html > HDFS’s placement policy is to put one replica on one node in the local rack, > another on a different node in the local rack, and the last on a different > node in a different rack. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12828) OIV ReverseXML Processor Fails With Escaped Characters
[ https://issues.apache.org/jira/browse/HDFS-12828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erik Krogen updated HDFS-12828: --- Attachment: fsimage_008.xml > OIV ReverseXML Processor Fails With Escaped Characters > -- > > Key: HDFS-12828 > URL: https://issues.apache.org/jira/browse/HDFS-12828 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Affects Versions: 2.8.0 >Reporter: Erik Krogen > Attachments: fsimage_008.xml > > > The HDFS OIV ReverseXML processor fails if the XML file contains escaped > characters: > {code} > ekrogen at ekrogen-ld1 in > ~/dev/hadoop/trunk/hadoop-dist/target/hadoop-3.0.0-beta1-SNAPSHOT on trunk! > ± $HADOOP_HOME/bin/hdfs dfs -fs hdfs://localhost:9000/ -ls / > Found 4 items > drwxr-xr-x - ekrogen supergroup 0 2017-11-16 14:48 /foo > drwxr-xr-x - ekrogen supergroup 0 2017-11-16 14:49 /foo" > drwxr-xr-x - ekrogen supergroup 0 2017-11-16 14:50 /foo` > drwxr-xr-x - ekrogen supergroup 0 2017-11-16 14:49 /foo& > {code} > Then after doing {{saveNamespace}} on that NameNode... > {code} > ekrogen at ekrogen-ld1 in > ~/dev/hadoop/trunk/hadoop-dist/target/hadoop-3.0.0-beta1-SNAPSHOT on trunk! > ± $HADOOP_HOME/bin/hdfs oiv -i > /tmp/hadoop-ekrogen/dfs/name/current/fsimage_008 -o > /tmp/hadoop-ekrogen/dfs/name/current/fsimage_008.xml -p XML > ekrogen at ekrogen-ld1 in > ~/dev/hadoop/trunk/hadoop-dist/target/hadoop-3.0.0-beta1-SNAPSHOT on trunk! > ± $HADOOP_HOME/bin/hdfs oiv -i > /tmp/hadoop-ekrogen/dfs/name/current/fsimage_008.xml -o > /tmp/hadoop-ekrogen/dfs/name/current/fsimage_008.xml.rev -p > ReverseXML > OfflineImageReconstructor failed: unterminated entity ref starting with & > org.apache.hadoop.hdfs.util.XMLUtils$UnmanglingError: unterminated entity ref > starting with & > at > org.apache.hadoop.hdfs.util.XMLUtils.unmangleXmlString(XMLUtils.java:232) > at > org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.loadNodeChildrenHelper(OfflineImageReconstructor.java:383) > at > org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.loadNodeChildrenHelper(OfflineImageReconstructor.java:379) > at > org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.loadNodeChildren(OfflineImageReconstructor.java:418) > at > org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.access$1000(OfflineImageReconstructor.java:95) > at > org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor$INodeSectionProcessor.process(OfflineImageReconstructor.java:524) > at > org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.processXml(OfflineImageReconstructor.java:1710) > at > org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.run(OfflineImageReconstructor.java:1765) > at > org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.run(OfflineImageViewerPB.java:191) > at > org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.main(OfflineImageViewerPB.java:134) > {code} > See attachments for relevant fsimage XML file. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-12828) OIV ReverseXML Processor Fails With Escaped Characters
Erik Krogen created HDFS-12828: -- Summary: OIV ReverseXML Processor Fails With Escaped Characters Key: HDFS-12828 URL: https://issues.apache.org/jira/browse/HDFS-12828 Project: Hadoop HDFS Issue Type: Bug Components: hdfs Affects Versions: 2.8.0 Reporter: Erik Krogen The HDFS OIV ReverseXML processor fails if the XML file contains escaped characters: {code} ekrogen at ekrogen-ld1 in ~/dev/hadoop/trunk/hadoop-dist/target/hadoop-3.0.0-beta1-SNAPSHOT on trunk! ± $HADOOP_HOME/bin/hdfs dfs -fs hdfs://localhost:9000/ -ls / Found 4 items drwxr-xr-x - ekrogen supergroup 0 2017-11-16 14:48 /foo drwxr-xr-x - ekrogen supergroup 0 2017-11-16 14:49 /foo" drwxr-xr-x - ekrogen supergroup 0 2017-11-16 14:50 /foo` drwxr-xr-x - ekrogen supergroup 0 2017-11-16 14:49 /foo& {code} Then after doing {{saveNamespace}} on that NameNode... {code} ekrogen at ekrogen-ld1 in ~/dev/hadoop/trunk/hadoop-dist/target/hadoop-3.0.0-beta1-SNAPSHOT on trunk! ± $HADOOP_HOME/bin/hdfs oiv -i /tmp/hadoop-ekrogen/dfs/name/current/fsimage_008 -o /tmp/hadoop-ekrogen/dfs/name/current/fsimage_008.xml -p XML ekrogen at ekrogen-ld1 in ~/dev/hadoop/trunk/hadoop-dist/target/hadoop-3.0.0-beta1-SNAPSHOT on trunk! ± $HADOOP_HOME/bin/hdfs oiv -i /tmp/hadoop-ekrogen/dfs/name/current/fsimage_008.xml -o /tmp/hadoop-ekrogen/dfs/name/current/fsimage_008.xml.rev -p ReverseXML OfflineImageReconstructor failed: unterminated entity ref starting with & org.apache.hadoop.hdfs.util.XMLUtils$UnmanglingError: unterminated entity ref starting with & at org.apache.hadoop.hdfs.util.XMLUtils.unmangleXmlString(XMLUtils.java:232) at org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.loadNodeChildrenHelper(OfflineImageReconstructor.java:383) at org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.loadNodeChildrenHelper(OfflineImageReconstructor.java:379) at org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.loadNodeChildren(OfflineImageReconstructor.java:418) at org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.access$1000(OfflineImageReconstructor.java:95) at org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor$INodeSectionProcessor.process(OfflineImageReconstructor.java:524) at org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.processXml(OfflineImageReconstructor.java:1710) at org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.run(OfflineImageReconstructor.java:1765) at org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.run(OfflineImageViewerPB.java:191) at org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.main(OfflineImageViewerPB.java:134) {code} See attachments for relevant fsimage XML file. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
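The failure itself comes down to plain XML escaping: a bare '&' (or '<', '>') in an inode name has to be written as an entity reference for the XML image to round-trip through ReverseXML. Below is a minimal illustration of the escaping rule involved; this is not the OfflineImageViewer code, just a sketch of what correct escaping looks like for the characters in question.
{code:java}
// Not the OIV code: a minimal illustration of why a raw '&' in an inode name
// breaks the reverse parse, and the standard XML escaping that avoids it.
public class XmlEscapeExample {
  static String escapeXml(String s) {
    StringBuilder sb = new StringBuilder(s.length());
    for (char c : s.toCharArray()) {
      switch (c) {
        case '&':  sb.append("&amp;");  break;
        case '<':  sb.append("&lt;");   break;
        case '>':  sb.append("&gt;");   break;
        case '"':  sb.append("&quot;"); break;
        case '\'': sb.append("&apos;"); break;
        default:   sb.append(c);
      }
    }
    return sb.toString();
  }

  public static void main(String[] args) {
    // "/foo&" must be emitted as foo&amp; inside the element text; writing the
    // raw '&' is what produces the "unterminated entity ref" error above.
    System.out.println(escapeXml("foo&"));  // prints foo&amp;
  }
}
{code}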
[jira] [Commented] (HDFS-12827) Need Clarity on Replica Placement: The First Baby Steps in HDFS Architecture documentation
[ https://issues.apache.org/jira/browse/HDFS-12827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16256090#comment-16256090 ] Suri babu Nuthalapati commented on HDFS-12827: -- Thank you for the response, [~bharatviswa]. Is there a design change in Hadoop V2 from V1 and V3, or is it just that the documentation was misrepresented in V2? If not, can we update the documentation at http://hadoop.apache.org/docs/r2.5.2/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html to reflect the correct details? Suri > Need Clarity on Replica Placement: The First Baby Steps in HDFS Architecture > documentation > -- > > Key: HDFS-12827 > URL: https://issues.apache.org/jira/browse/HDFS-12827 > Project: Hadoop HDFS > Issue Type: Improvement > Components: documentation >Reporter: Suri babu Nuthalapati >Priority: Minor > > The placement should be this: > https://hadoop.apache.org/docs/r1.2.1/hdfs_design.html > HDFS’s placement policy is to put one replica on one node in the local rack, > another on a node in a different (remote) rack, and the last on a different > node in the same remote rack. > Hadoop Definitive guide says the same and I have tested and saw the same > behavior as above. > > But the documentation in the versions after r2.5.2 it was mentioned as: > http://hadoop.apache.org/docs/r2.5.2/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html > HDFS’s placement policy is to put one replica on one node in the local rack, > another on a different node in the local rack, and the last on a different > node in a different rack. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-12827) Need Clarity on Replica Placement: The First Baby Steps in HDFS Architecture documentation
[ https://issues.apache.org/jira/browse/HDFS-12827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16256077#comment-16256077 ] Bharat Viswanadham edited comment on HDFS-12827 at 11/16/17 10:38 PM: -- Hi [~surinuthalap...@live.com] In the latest design document, it is mentioned correctly {code:java} when the replication factor is three, HDFS’s placement policy is to put one replica on the local machine if the writer is on a datanode, otherwise on a random datanode, another replica on a node in a different (remote) rack, and the last on a different node in the same remote rack {code} . http://hadoop.apache.org/docs/r3.0.0-beta1/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html Pls let me know any more is needed? was (Author: bharatviswa): Hi [~surinuthalap...@live.com] In the latest design document, it is mentioned correctly {code:java} when the replication factor is three, HDFS’s placement policy is to put one replica on the local machine if the writer is on a datanode, otherwise on a random datanode, another replica on a node in a different (remote) rack, and the last on a different node in the same remote rack {code} . Pls let me know any more is needed? > Need Clarity on Replica Placement: The First Baby Steps in HDFS Architecture > documentation > -- > > Key: HDFS-12827 > URL: https://issues.apache.org/jira/browse/HDFS-12827 > Project: Hadoop HDFS > Issue Type: Improvement > Components: documentation >Reporter: Suri babu Nuthalapati >Priority: Minor > > The placement should be this: > https://hadoop.apache.org/docs/r1.2.1/hdfs_design.html > HDFS’s placement policy is to put one replica on one node in the local rack, > another on a node in a different (remote) rack, and the last on a different > node in the same remote rack. > Hadoop Definitive guide says the same and I have tested and saw the same > behavior as above. > > But the documentation in the versions after r2.5.2 it was mentioned as: > http://hadoop.apache.org/docs/r2.5.2/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html > HDFS’s placement policy is to put one replica on one node in the local rack, > another on a different node in the local rack, and the last on a different > node in a different rack. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12827) Need Clarity on Replica Placement: The First Baby Steps in HDFS Architecture documentation
[ https://issues.apache.org/jira/browse/HDFS-12827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16256077#comment-16256077 ] Bharat Viswanadham commented on HDFS-12827: --- Hi [~surinuthalap...@live.com] In the latest design document, it is mentioned correctly {code:java} when the replication factor is three, HDFS’s placement policy is to put one replica on the local machine if the writer is on a datanode, otherwise on a random datanode, another replica on a node in a different (remote) rack, and the last on a different node in the same remote rack {code} . Pls let me know any more is needed? > Need Clarity on Replica Placement: The First Baby Steps in HDFS Architecture > documentation > -- > > Key: HDFS-12827 > URL: https://issues.apache.org/jira/browse/HDFS-12827 > Project: Hadoop HDFS > Issue Type: Improvement > Components: documentation >Reporter: Suri babu Nuthalapati >Priority: Minor > > The placement should be this: > https://hadoop.apache.org/docs/r1.2.1/hdfs_design.html > HDFS’s placement policy is to put one replica on one node in the local rack, > another on a node in a different (remote) rack, and the last on a different > node in the same remote rack. > Hadoop Definitive guide says the same and I have tested and saw the same > behavior as above. > > But the documentation in the versions after r2.5.2 it was mentioned as: > http://hadoop.apache.org/docs/r2.5.2/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html > HDFS’s placement policy is to put one replica on one node in the local rack, > another on a different node in the local rack, and the last on a different > node in a different rack. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12528) Short-circuit reads unnecessarily disabled for a long time
[ https://issues.apache.org/jira/browse/HDFS-12528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Zhuge updated HDFS-12528: -- Summary: Short-circuit reads unnecessarily disabled for a long time (was: Short-circuit reads getting disabled frequently in certain scenarios) > Short-circuit reads unnecessarily disabled for a long time > -- > > Key: HDFS-12528 > URL: https://issues.apache.org/jira/browse/HDFS-12528 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs-client, performance >Affects Versions: 2.6.0 >Reporter: Andre Araujo >Assignee: John Zhuge > Attachments: HDFS-12528.000.patch > > > We have scenarios where data ingestion makes use of the -appendToFile > operation to add new data to existing HDFS files. In these situations, we're > frequently running into the problem described below. > We're using Impala to query the HDFS data with short-circuit reads (SCR) > enabled. After each file read, Impala "unbuffer"'s the HDFS file to reduce > the memory footprint. In some cases, though, Impala still keeps the HDFS file > handle open for reuse. > The "unbuffer" call, however, causes the file's current block reader to be > closed, which makes the associated ShortCircuitReplica evictable from the > ShortCircuitCache. When the cluster is under load, this means that the > ShortCircuitReplica can be purged off the cache pretty fast, which closes the > file descriptor to the underlying storage file. > That means that when Impala re-reads the file it has to re-open the storage > files associated with the ShortCircuitReplica's that were evicted from the > cache. If there were no appends to those blocks, the re-open will succeed > without problems. If one block was appended since the ShortCircuitReplica was > created, the re-open will fail with the following error: > {code} > Meta file for BP-810388474-172.31.113.69-1499543341726:blk_1074012183_273087 > not found > {code} > This error is handled as an "unknown response" by the BlockReaderFactory [1], > which disables short-circuit reads for 10 minutes [2] for the client. > These 10 minutes without SCR can have a big performance impact for the client > operations. In this particular case ("Meta file not found") it would suffice > to return null without disabling SCR. This particular block read would fall > back to the normal, non-short-circuited, path and other SCR requests would > continue to work as expected. > It might also be interesting to be able to control how long SCR is disabled > for in the "unknown response" case. 10 minutes seems a bit too long and not > being able to change that is a problem. > [1] > https://github.com/apache/hadoop/blob/f67237cbe7bc48a1b9088e990800b37529f1db2a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/BlockReaderFactory.java#L646 > [2] > https://github.com/apache/hadoop/blob/f67237cbe7bc48a1b9088e990800b37529f1db2a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/shortcircuit/DomainSocketFactory.java#L97 -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
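For context on the report: short-circuit reads are an opt-in client feature, and when the 10-minute disable window described above kicks in, reads silently fall back to the normal TCP path. A minimal sketch of the standard client-side settings that turn SCR on is below; the socket path is an example value and must match the DataNode's configured dfs.domain.socket.path.
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;

public class ShortCircuitClientConf {
  // Returns a client configuration with short-circuit reads enabled.
  public static Configuration scrEnabled() {
    Configuration conf = new HdfsConfiguration();
    conf.setBoolean("dfs.client.read.shortcircuit", true);
    // Example path; it must match the DataNode's domain socket configuration.
    conf.set("dfs.domain.socket.path", "/var/lib/hadoop-hdfs/dn_socket");
    return conf;
  }
}
{code}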
[jira] [Updated] (HDFS-12823) Backport HDFS-9259 "Make SO_SNDBUF size configurable at DFSClient" to branch-2.7
[ https://issues.apache.org/jira/browse/HDFS-12823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erik Krogen updated HDFS-12823: --- Attachment: HDFS-12823-branch-2.7.001.patch Fair enough, attached v001 patch with a getter for {{socketSendBufferSize}}. > Backport HDFS-9259 "Make SO_SNDBUF size configurable at DFSClient" to > branch-2.7 > > > Key: HDFS-12823 > URL: https://issues.apache.org/jira/browse/HDFS-12823 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs, hdfs-client >Reporter: Erik Krogen >Assignee: Erik Krogen > Attachments: HDFS-12823-branch-2.7.000.patch, > HDFS-12823-branch-2.7.001.patch > > > Given the pretty significant performance implications of HDFS-9259 (see > discussion in HDFS-10326) when doing transfers across high latency links, it > would be helpful to have this configurability exist in the 2.7 series. > Opening a new JIRA since the original HDFS-9259 has been closed for a while > and there are conflicts due to a few classes moving. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
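For context, the getter mentioned above amounts to something like the following shape inside {{DFSClient.Conf}} — a hedged sketch only, not the actual v001 patch content:
{code:java}
// Sketch of the accessor's shape, not the actual DFSClient.Conf code.
class Conf {
  private final int socketSendBufferSize;

  Conf(int socketSendBufferSize) {
    this.socketSendBufferSize = socketSendBufferSize;
  }

  // Expose the HDFS-9259 field through an accessor instead of bare field access.
  public int getSocketSendBufferSize() {
    return socketSendBufferSize;
  }
}
{code}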
[jira] [Commented] (HDFS-12823) Backport HDFS-9259 "Make SO_SNDBUF size configurable at DFSClient" to branch-2.7
[ https://issues.apache.org/jira/browse/HDFS-12823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16256043#comment-16256043 ] Manoj Govindassamy commented on HDFS-12823: --- [~xkrogen], Yes, not a good idea to introduce getters and setters for all those 50+ fields as part of this jira. Adding a getter for the newly added ones will be better though. Otherwise, the v0 patch LGTM, +1. Thanks for working on this. > Backport HDFS-9259 "Make SO_SNDBUF size configurable at DFSClient" to > branch-2.7 > > > Key: HDFS-12823 > URL: https://issues.apache.org/jira/browse/HDFS-12823 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs, hdfs-client >Reporter: Erik Krogen >Assignee: Erik Krogen > Attachments: HDFS-12823-branch-2.7.000.patch > > > Given the pretty significant performance implications of HDFS-9259 (see > discussion in HDFS-10326) when doing transfers across high latency links, it > would be helpful to have this configurability exist in the 2.7 series. > Opening a new JIRA since the original HDFS-9259 has been closed for a while > and there are conflicts due to a few classes moving. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12827) Need Clarity on Replica Placement: The First Baby Steps in HDFS Architecture documentation
[ https://issues.apache.org/jira/browse/HDFS-12827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Suri babu Nuthalapati updated HDFS-12827: - Summary: Need Clarity on Replica Placement: The First Baby Steps in HDFS Architecture documentation (was: Need Clarity onReplica Placement: The First Baby Steps in HDFS Architecture documentation) > Need Clarity on Replica Placement: The First Baby Steps in HDFS Architecture > documentation > -- > > Key: HDFS-12827 > URL: https://issues.apache.org/jira/browse/HDFS-12827 > Project: Hadoop HDFS > Issue Type: Improvement > Components: documentation >Reporter: Suri babu Nuthalapati >Priority: Minor > > The placement should be this: > https://hadoop.apache.org/docs/r1.2.1/hdfs_design.html > HDFS’s placement policy is to put one replica on one node in the local rack, > another on a node in a different (remote) rack, and the last on a different > node in the same remote rack. > The Hadoop Definitive Guide says the same, and I have tested and observed the same > behavior as above. > > But in the documentation for versions after r2.5.2 it is described as: > http://hadoop.apache.org/docs/r2.5.2/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html > HDFS’s placement policy is to put one replica on one node in the local rack, > another on a different node in the local rack, and the last on a different > node in a different rack. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12827) Need Clarity onReplica Placement: The First Baby Steps in HDFS Architecture documentation
[ https://issues.apache.org/jira/browse/HDFS-12827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Suri babu Nuthalapati updated HDFS-12827: - Summary: Need Clarity onReplica Placement: The First Baby Steps in HDFS Architecture documentation (was: Update the description about Replica Placement: The First Baby Steps in HDFS Architecture documentation) > Need Clarity onReplica Placement: The First Baby Steps in HDFS Architecture > documentation > - > > Key: HDFS-12827 > URL: https://issues.apache.org/jira/browse/HDFS-12827 > Project: Hadoop HDFS > Issue Type: Improvement > Components: documentation >Reporter: Suri babu Nuthalapati >Priority: Minor > > The placement should be this: > https://hadoop.apache.org/docs/r1.2.1/hdfs_design.html > HDFS’s placement policy is to put one replica on one node in the local rack, > another on a node in a different (remote) rack, and the last on a different > node in the same remote rack. > The Hadoop Definitive Guide says the same, and I have tested and observed the same > behavior as above. > > But in the documentation for versions after r2.5.2 it is described as: > http://hadoop.apache.org/docs/r2.5.2/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html > HDFS’s placement policy is to put one replica on one node in the local rack, > another on a different node in the local rack, and the last on a different > node in a different rack. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12730) Verify open files captured in the snapshots across config disable and enable
[ https://issues.apache.org/jira/browse/HDFS-12730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16255984#comment-16255984 ] Hadoop QA commented on HDFS-12730: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 9s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 36s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 26s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 48s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 58s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}119m 6s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}169m 12s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Unreaped Processes | hadoop-hdfs:1 | | Failed junit tests | hadoop.hdfs.TestReadStripedFileWithMissingBlocks | | | hadoop.fs.TestUnbuffer | | Timed out junit tests | org.apache.hadoop.hdfs.TestLeaseRecovery2 | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | HDFS-12730 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12898029/HDFS-12730.02.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux bc168f67c8c4 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 61ace17 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | Unreaped Processes Log | https://builds.apache.org/job/PreCommit-HDFS-Build/22119/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-reaper.txt | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/22119/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/22119/testReport/
[jira] [Commented] (HDFS-12823) Backport HDFS-9259 "Make SO_SNDBUF size configurable at DFSClient" to branch-2.7
[ https://issues.apache.org/jira/browse/HDFS-12823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16255980#comment-16255980 ] Erik Krogen commented on HDFS-12823: Hi [~manojg], thanks for taking a look! I would love to but that method does not exist in branch-2.7. In the 2.7 branch the fields of {{DFSClient.Conf}} are generally accessed bare; there are 50+ fields and only 4 direct getter methods. > Backport HDFS-9259 "Make SO_SNDBUF size configurable at DFSClient" to > branch-2.7 > > > Key: HDFS-12823 > URL: https://issues.apache.org/jira/browse/HDFS-12823 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs, hdfs-client >Reporter: Erik Krogen >Assignee: Erik Krogen > Attachments: HDFS-12823-branch-2.7.000.patch > > > Given the pretty significant performance implications of HDFS-9259 (see > discussion in HDFS-10326) when doing transfers across high latency links, it > would be helpful to have this configurability exist in the 2.7 series. > Opening a new JIRA since the original HDFS-9259 has been closed for a while > and there are conflicts due to a few classes moving. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12823) Backport HDFS-9259 "Make SO_SNDBUF size configurable at DFSClient" to branch-2.7
[ https://issues.apache.org/jira/browse/HDFS-12823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erik Krogen updated HDFS-12823: --- Status: Patch Available (was: In Progress) > Backport HDFS-9259 "Make SO_SNDBUF size configurable at DFSClient" to > branch-2.7 > > > Key: HDFS-12823 > URL: https://issues.apache.org/jira/browse/HDFS-12823 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs, hdfs-client >Reporter: Erik Krogen >Assignee: Erik Krogen > Attachments: HDFS-12823-branch-2.7.000.patch > > > Given the pretty significant performance implications of HDFS-9259 (see > discussion in HDFS-10326) when doing transfers across high latency links, it > would be helpful to have this configurability exist in the 2.7 series. > Opening a new JIRA since the original HDFS-9259 has been closed for a while > and there are conflicts due to a few classes moving. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12778) [READ] Report multiple locations for PROVIDED blocks
[ https://issues.apache.org/jira/browse/HDFS-12778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Virajith Jalaparti updated HDFS-12778: -- Status: Patch Available (was: Open) > [READ] Report multiple locations for PROVIDED blocks > > > Key: HDFS-12778 > URL: https://issues.apache.org/jira/browse/HDFS-12778 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Virajith Jalaparti >Assignee: Virajith Jalaparti > Attachments: HDFS-12778-HDFS-9806.001.patch, > HDFS-12778-HDFS-9806.002.patch, HDFS-12778-HDFS-9806.003.patch > > > On {{getBlockLocations}}, only one Datanode is returned as the location for > all PROVIDED blocks. This can hurt the performance of applications which > typically expect 3 locations per block. We need to return multiple Datanodes for > each PROVIDED block for better application performance/resilience. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12778) [READ] Report multiple locations for PROVIDED blocks
[ https://issues.apache.org/jira/browse/HDFS-12778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Virajith Jalaparti updated HDFS-12778: -- Attachment: HDFS-12778-HDFS-9806.003.patch Updated patch fixing the findbugs and checkstyle issues. The failed tests pass locally except {{TestCheckpoint}}, which is unrelated. > [READ] Report multiple locations for PROVIDED blocks > > > Key: HDFS-12778 > URL: https://issues.apache.org/jira/browse/HDFS-12778 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Virajith Jalaparti >Assignee: Virajith Jalaparti > Attachments: HDFS-12778-HDFS-9806.001.patch, > HDFS-12778-HDFS-9806.002.patch, HDFS-12778-HDFS-9806.003.patch > > > On {{getBlockLocations}}, only one Datanode is returned as the location for > all PROVIDED blocks. This can hurt the performance of applications which > typically expect 3 locations per block. We need to return multiple Datanodes for > each PROVIDED block for better application performance/resilience. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12778) [READ] Report multiple locations for PROVIDED blocks
[ https://issues.apache.org/jira/browse/HDFS-12778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Virajith Jalaparti updated HDFS-12778: -- Status: Open (was: Patch Available) > [READ] Report multiple locations for PROVIDED blocks > > > Key: HDFS-12778 > URL: https://issues.apache.org/jira/browse/HDFS-12778 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Virajith Jalaparti >Assignee: Virajith Jalaparti > Attachments: HDFS-12778-HDFS-9806.001.patch, > HDFS-12778-HDFS-9806.002.patch, HDFS-12778-HDFS-9806.003.patch > > > On {{getBlockLocations}}, only one Datanode is returned as the location for > all PROVIDED blocks. This can hurt the performance of applications which > typically expect 3 locations per block. We need to return multiple Datanodes for > each PROVIDED block for better application performance/resilience. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12681) Fold HdfsLocatedFileStatus into HdfsFileStatus
[ https://issues.apache.org/jira/browse/HDFS-12681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Douglas updated HDFS-12681: - Attachment: HDFS-12681.12.patch Revised patch. This should fix the unit test failures. Also added a unit test to ensure {{HdfsFileStatus}} remains a superset of {{FileStatus}}. This modifies the approach taken by HDFS-12455 by removing the {{setSnapShotEnabledFlag}} method and exposing {{AttrFlags}}. Frankly, I'm not convinced that exposing all these attribute flags in {{FileStatus}}, when most are only meaningful to HDFS, is valuable. The point is moot since we've already released it, but I hope we can eventually curtail the practice. > Fold HdfsLocatedFileStatus into HdfsFileStatus > -- > > Key: HDFS-12681 > URL: https://issues.apache.org/jira/browse/HDFS-12681 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Chris Douglas >Assignee: Chris Douglas >Priority: Minor > Attachments: HDFS-12681.00.patch, HDFS-12681.01.patch, > HDFS-12681.02.patch, HDFS-12681.03.patch, HDFS-12681.04.patch, > HDFS-12681.05.patch, HDFS-12681.06.patch, HDFS-12681.07.patch, > HDFS-12681.08.patch, HDFS-12681.09.patch, HDFS-12681.10.patch, > HDFS-12681.11.patch, HDFS-12681.12.patch > > > {{HdfsLocatedFileStatus}} is a subtype of {{HdfsFileStatus}}, but not of > {{LocatedFileStatus}}. Conversion requires copying common fields and shedding > unknown data. It would be cleaner and sufficient for {{HdfsFileStatus}} to > extend {{LocatedFileStatus}}. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12823) Backport HDFS-9259 "Make SO_SNDBUF size configurable at DFSClient" to branch-2.7
[ https://issues.apache.org/jira/browse/HDFS-12823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16255937#comment-16255937 ] Manoj Govindassamy commented on HDFS-12823: --- [~xkrogen], Can we please make use of {{getSocketSendBufferSize()}} instead of directly referring to the member variable in the below check in {{DFSOutputStream}}? {noformat} 1704if (client.getConf().socketSendBufferSize > 0) { 1705 sock.setSendBufferSize(client.getConf().socketSendBufferSize); 1706} {noformat} > Backport HDFS-9259 "Make SO_SNDBUF size configurable at DFSClient" to > branch-2.7 > > > Key: HDFS-12823 > URL: https://issues.apache.org/jira/browse/HDFS-12823 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs, hdfs-client >Reporter: Erik Krogen >Assignee: Erik Krogen > Attachments: HDFS-12823-branch-2.7.000.patch > > > Given the pretty significant performance implications of HDFS-9259 (see > discussion in HDFS-10326) when doing transfers across high latency links, it > would be helpful to have this configurability exist in the 2.7 series. > Opening a new JIRA since the original HDFS-9259 has been closed for a while > and there are conflicts due to a few classes moving. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
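In other words, the suggestion is for the check to go through an accessor rather than the bare field — roughly the following, assuming a getter named {{getSocketSendBufferSize()}} is added:
{code:java}
// Sketch of the suggested form of the DFSOutputStream check.
if (client.getConf().getSocketSendBufferSize() > 0) {
  sock.setSendBufferSize(client.getConf().getSocketSendBufferSize());
}
{code}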
[jira] [Created] (HDFS-12827) Update the description about Replica Placement: The First Baby Steps in HDFS Architecture documentation
Suri babu Nuthalapati created HDFS-12827: Summary: Update the description about Replica Placement: The First Baby Steps in HDFS Architecture documentation Key: HDFS-12827 URL: https://issues.apache.org/jira/browse/HDFS-12827 Project: Hadoop HDFS Issue Type: Improvement Components: documentation Reporter: Suri babu Nuthalapati Priority: Minor The placement should be this: https://hadoop.apache.org/docs/r1.2.1/hdfs_design.html HDFS’s placement policy is to put one replica on one node in the local rack, another on a node in a different (remote) rack, and the last on a different node in the same remote rack. The Hadoop Definitive Guide says the same, and I have tested and observed the same behavior as above. But in the documentation for versions after r2.5.2 it is described as: http://hadoop.apache.org/docs/r2.5.2/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html HDFS’s placement policy is to put one replica on one node in the local rack, another on a different node in the local rack, and the last on a different node in a different rack. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12500) Ozone: add logger for oz shell commands and move error stack traces to DEBUG level
[ https://issues.apache.org/jira/browse/HDFS-12500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16255873#comment-16255873 ] Anu Engineer commented on HDFS-12500: - [~linyiqun] Thanks for fixing this. Test failures are not related to this patch. I will commit this shortly. [~cheersyang] Thanks for filing this. > Ozone: add logger for oz shell commands and move error stack traces to DEBUG > level > -- > > Key: HDFS-12500 > URL: https://issues.apache.org/jira/browse/HDFS-12500 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Weiwei Yang >Assignee: Yiqun Lin >Priority: Minor > Attachments: HDFS-12500-HDFS-7240.001.patch > > > Per discussion in HDFS-12489, to reduce the verbosity of logs when exceptions > happen, let's add a logger to {{Shell.java}} and move error stack traces to > DEBUG level. > And to track the execution time of oz commands, once the logger is added, let's > add a debug log that prints the total time a command execution took. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
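As an illustration of what the requested change amounts to, here is a hedged sketch of a shell entry point with a logger and a DEBUG-level timing message (class and method names are assumptions, not the committed HDFS-12500 patch):
{code:java}
import org.apache.hadoop.util.Time;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class OzoneShellSketch {
  private static final Logger LOG = LoggerFactory.getLogger(OzoneShellSketch.class);

  int dispatch(String[] argv) {
    long start = Time.monotonicNow();
    try {
      // ... parse argv and run the requested oz command ...
      return 0;
    } catch (Exception ex) {
      // Keep console output terse; the full stack trace only shows at DEBUG.
      System.err.println("Command failed: " + ex.getMessage());
      LOG.debug("Command failed", ex);
      return 1;
    } finally {
      LOG.debug("Command execution took {} ms", Time.monotonicNow() - start);
    }
  }

  public static void main(String[] args) {
    System.exit(new OzoneShellSketch().dispatch(args));
  }
}
{code}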
[jira] [Commented] (HDFS-12745) Ozone: XceiverClientManager should cache objects based on pipeline name
[ https://issues.apache.org/jira/browse/HDFS-12745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16255756#comment-16255756 ] Xiaoyu Yao commented on HDFS-12745: --- [~msingh], I took a look at patch v6 and it looks good to me overall. Here are some comments: NIT: XceiverClientStandAlone.java -> XceiverClientStandalone.java, and the class name, to be consistent. Pipeline.java Line 79: Should we use the OzoneProtos.Pipeline directly to avoid the conversion? It also has a builder to construct the pipeline object without worrying about different constructors. ContainerProtocolCalls.java Line 229: agree with [~anu] that we can get the pipeline from a client unless it is removed from the XceiverClientSpi interface and implementation. Can you elaborate on adding the additional parameter for the pipeline? Line 237: should we remove the containerName from the protobuf definition for pipeline, as a pipeline may be shared by multiple containers? XceiverServerInitializer.java Line 33: NIT: XceiverServerStandAlone->XceiverServerStandalone XceiverServerStandAlone.java Line 45/47/60: same as above PipelineManager.java Line 40: suggest changing activePipelines from private to protected so that subclasses can access it to update/close pipelines. Also, how do we ensure the thread safety of activePipelines among get/update/close operations? Line 78: should we allow round-robin of existing pipelines if newNodes is less than the replication factor requested? Line 82/93: the number of placeholders in the format string (1) is less than the number of actual parameters (2); we need a {} after “for container” > Ozone: XceiverClientManager should cache objects based on pipeline name > --- > > Key: HDFS-12745 > URL: https://issues.apache.org/jira/browse/HDFS-12745 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Mukul Kumar Singh >Assignee: Mukul Kumar Singh > Fix For: HDFS-7240 > > Attachments: HDFS-12745-HDFS-7240.001.patch, > HDFS-12745-HDFS-7240.002.patch, HDFS-12745-HDFS-7240.003.patch, > HDFS-12745-HDFS-7240.004.patch, HDFS-12745-HDFS-7240.005.patch, > HDFS-12745-HDFS-7240.006.patch > > > With just the standalone pipeline, a new pipeline was created for each and > every container. > This code can be optimized so that pipelines are created less frequently. > Caching using pipeline names will help with Ratis clients as well. > a) Remove Container name from Pipeline object. > b) XceiverClientManager should cache objects based on pipeline name > c) XceiverClient and XceiverServer should be renamed to > XceiverClientStandAlone & XceiverServerRatis > d) StandAlone pipeline should have notion of re-using pipeline objects. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
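To make the caching idea in the description concrete, here is a minimal sketch of keying client objects by pipeline name rather than by container name, so that containers sharing a pipeline reuse one client (the types below are placeholders, not the actual XceiverClientManager code):
{code:java}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Placeholder standing in for XceiverClientSpi; connects to a pipeline's datanodes.
class PipelineClient {
  PipelineClient(String pipelineName) {
    // ... open connections to the datanodes in this pipeline ...
  }
}

class PipelineClientCache {
  private final ConcurrentMap<String, PipelineClient> clients = new ConcurrentHashMap<>();

  /** One client per pipeline name, shared by every container on that pipeline. */
  PipelineClient acquire(String pipelineName) {
    return clients.computeIfAbsent(pipelineName, PipelineClient::new);
  }
}
{code}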
[jira] [Commented] (HDFS-12594) SnapshotDiff - snapshotDiff fails if the snapshotDiff report exceeds the RPC response limit
[ https://issues.apache.org/jira/browse/HDFS-12594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16255749#comment-16255749 ] Hadoop QA commented on HDFS-12594: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 40s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 55s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 0s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 19s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 37s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 51s{color} | {color:orange} hadoop-hdfs-project: The patch generated 4 new + 951 unchanged - 0 fixed = 955 total (was 951) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 45s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 49s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client generated 6 new + 0 unchanged - 0 fixed = 6 total (was 0) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 29s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 27s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}127m 55s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}190m 28s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-client | | | org.apache.hadoop.hdfs.protocol.SnapshotDiffReportListing.getStartPath() may expose internal representation by returning SnapshotDiffReportListing.startPath At SnapshotDiffReportListing.java:by returning SnapshotDiffReportListing.startPath At SnapshotDiffReportListing.java:[line 162] | | | org.apache.hadoop.hdfs.protocol.SnapshotDiffReportListing$DiffRe
[jira] [Commented] (HDFS-12778) [READ] Report multiple locations for PROVIDED blocks
[ https://issues.apache.org/jira/browse/HDFS-12778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16255737#comment-16255737 ] Hadoop QA commented on HDFS-12778: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} HDFS-9806 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 31s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 28s{color} | {color:green} HDFS-9806 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 5s{color} | {color:green} HDFS-9806 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 7s{color} | {color:green} HDFS-9806 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 39s{color} | {color:green} HDFS-9806 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 17s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 33s{color} | {color:red} hadoop-tools/hadoop-fs2img in HDFS-9806 has 1 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 20s{color} | {color:green} HDFS-9806 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 55s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 4s{color} | {color:orange} root: The patch generated 3 new + 18 unchanged - 0 fixed = 21 total (was 18) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 5s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 41s{color} | {color:red} hadoop-tools/hadoop-fs2img generated 1 new + 1 unchanged - 0 fixed = 2 total (was 1) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 19s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}129m 5s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 18s{color} | {color:green} hadoop-fs2img in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 46s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}216m 0s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hadoop-tools/hadoop-fs2img | | | org.apache.hadoop.hdfs.server.namenode.FixedBlockResolver.BLOCKSIZE_DEFAULT isn't final but should be At FixedBlockResolver.java:be At FixedBlockResolver.java:[line 37] | | Failed junit tests | hadoop.hdfs.server.datanode.checker.TestThrottledAsyncChecker | | | hadoop.hdfs.server.namenode.TestCheckpoint | | | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency | | | hadoop.hdfs.TestDFSStripedInputStream | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure180 | | | hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations | | | hadoop.hdfs.TestDFSStripedOutputStream | |
[jira] [Updated] (HDFS-12730) Verify open files captured in the snapshots across config disable and enable
[ https://issues.apache.org/jira/browse/HDFS-12730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Manoj Govindassamy updated HDFS-12730: -- Attachment: HDFS-12730.02.patch Attached v02 patch to address the comment -- added a case that switches the config from on to off and verifies the effect on the lengths of open files in newly taken snapshots. [~yzhangal], [~hanishakoneru], can you please take a look? > Verify open files captured in the snapshots across config disable and enable > > > Key: HDFS-12730 > URL: https://issues.apache.org/jira/browse/HDFS-12730 > Project: Hadoop HDFS > Issue Type: Test > Components: hdfs >Affects Versions: 3.0.0 >Reporter: Manoj Govindassamy >Assignee: Manoj Govindassamy > Attachments: HDFS-12730.01.patch, HDFS-12730.02.patch > > > Open files captured in the snapshots have their metadata preserved based on > the config > _dfs.namenode.snapshot.capture.openfiles_ (refer HDFS-11402). During an upgrade, or when the NameNode gets restarted with the config turned on or > off, the attributes of the open files captured in the snapshots are > influenced accordingly. It is better to have a test case that verifies open file > attributes across config turn-on and turn-off, and the behavior currently expected > with HDFS-11402, so as to catch any regressions in the future. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
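For readers unfamiliar with the feature under test, a hedged sketch of the kind of check involved: hsync data into an open file, take a snapshot, keep writing, and compare the length seen through the snapshot path with the live file. Cluster setup is assumed here; the real test uses MiniDFSCluster and the dfs.namenode.snapshot.capture.openfiles setting from HDFS-11402.
{code:java}
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class OpenFileSnapshotSketch {
  // Assumes dfs points at a cluster where snapshots are allowed on /data.
  static void check(DistributedFileSystem dfs) throws Exception {
    Path dir = new Path("/data");
    Path file = new Path(dir, "open.log");
    try (FSDataOutputStream out = dfs.create(file)) {
      out.write(new byte[1024]);
      out.hsync();                        // 1 KB persisted while the file is still open
      dfs.createSnapshot(dir, "s1");      // snapshot taken with the file open
      out.write(new byte[1024]);          // grow the live file after the snapshot
      out.hsync();
    }
    long snapLen = dfs.getFileStatus(new Path(dir, ".snapshot/s1/open.log")).getLen();
    long liveLen = dfs.getFileStatus(file).getLen();
    // With capture.openfiles on, the snapshot is expected to pin the pre-snapshot
    // length; with it off, the snapshot view may track the live length instead.
    System.out.println("snapshot=" + snapLen + ", live=" + liveLen);
  }
}
{code}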
[jira] [Comment Edited] (HDFS-12594) SnapshotDiff - snapshotDiff fails if the snapshotDiff report exceeds the RPC response limit
[ https://issues.apache.org/jira/browse/HDFS-12594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16255468#comment-16255468 ] Shashikant Banerjee edited comment on HDFS-12594 at 11/16/17 3:20 PM: -- Thanks [~szetszwo] , for the review comments. patch v8 addresses the same. >>DFSUtilClient.bytes2byteArray and DFSUtil.bytes2byteArray are very similar >>but there is a small difference when len == 0: DFSUtilClient returns new byte[0][] and DFSUtil returns new byte[][]{null}. Is it a bug? <{}(byte[])->byte[][]{null}. Reverse Mapping: byte[][]{null}->byte[]{(byte) ("/") }->String("/"). I have addressed the problems in conversion of byte[][] to byte[] . Please have a look. was (Author: shashikant): Thanks [~szetszwo] , for the review comments. patch v8 addresses the same. >>DFSUtilClient.bytes2byteArray and DFSUtil.bytes2byteArray are very similar >>but there is a small difference when len == 0: DFSUtilClient returns new byte[0][] and DFSUtil returns new byte[][]{null}. Is it a bug? <{}(byte[])->byte[][]{null}; Reverse Mapping: byte[][]{null}->byte[]{(byte) ("/") }->String("/"); I have addressed the problems in conversion of byte[][] to byte[] . Please have a look. > SnapshotDiff - snapshotDiff fails if the snapshotDiff report exceeds the RPC > response limit > --- > > Key: HDFS-12594 > URL: https://issues.apache.org/jira/browse/HDFS-12594 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Reporter: Shashikant Banerjee >Assignee: Shashikant Banerjee > Attachments: HDFS-12594.001.patch, HDFS-12594.002.patch, > HDFS-12594.003.patch, HDFS-12594.004.patch, HDFS-12594.005.patch, > HDFS-12594.006.patch, HDFS-12594.007.patch, HDFS-12594.008.patch, > SnapshotDiff_Improvemnets .pdf > > > The snapshotDiff command fails if the snapshotDiff report size is larger than > the configuration value of ipc.maximum.response.length which is by default > 128 MB. > Worst case, with all Renames ops in sanpshots each with source and target > name equal to MAX_PATH_LEN which is 8k characters, this would result in at > 8192 renames. > > SnapshotDiff is currently used by distcp to optimize copy operations and in > case of the the diff report exceeding the limit , it fails with the below > exception: > Test set: > org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport > --- > Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 112.095 sec > <<< FAILURE! - in > org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport > testDiffReportWithMillionFiles(org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport) > Time elapsed: 111.906 sec <<< ERROR! > java.io.IOException: Failed on local exception: > org.apache.hadoop.ipc.RpcException: RPC response exceeds maximum data length; > Host Details : local host is: "hw15685.local/10.200.5.230"; destination host > is: "localhost":59808; > Attached is the proposal for the changes required. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-12594) SnapshotDiff - snapshotDiff fails if the snapshotDiff report exceeds the RPC response limit
[ https://issues.apache.org/jira/browse/HDFS-12594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16255468#comment-16255468 ] Shashikant Banerjee edited comment on HDFS-12594 at 11/16/17 3:16 PM: -- Thanks [~szetszwo] , for the review comments. patch v8 addresses the same. >>DFSUtilClient.bytes2byteArray and DFSUtil.bytes2byteArray are very similar >>but there is a small difference when len == 0: DFSUtilClient returns new byte[0][] and DFSUtil returns new byte[][]{null}. Is it a bug? <{}(byte[])->byte[][]{null}; Reverse Mapping: byte[][]{null}->byte[]{(byte) ("/") }->String("/"); I have addressed the problems in conversion of byte[][] to byte[] . Please have a look. was (Author: shashikant): Thanks [~szetszwo] , for the review comments. patch v8 addresses the same. >>DFSUtilClient.bytes2byteArray and DFSUtil.bytes2byteArray are very similar >>but there is a small difference when len == 0: DFSUtilClient returns new byte[0][] and DFSUtil returns new byte[][]{null}. Is it a bug? <{}(byte[])->byte[][]{null}; Reverse Mapping: byte[][]{null}->byte[]{(byte) ("/") }->String("/") I have addressed the problems in conversion of byte[][] to byte[] . Please have a look. > SnapshotDiff - snapshotDiff fails if the snapshotDiff report exceeds the RPC > response limit > --- > > Key: HDFS-12594 > URL: https://issues.apache.org/jira/browse/HDFS-12594 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Reporter: Shashikant Banerjee >Assignee: Shashikant Banerjee > Attachments: HDFS-12594.001.patch, HDFS-12594.002.patch, > HDFS-12594.003.patch, HDFS-12594.004.patch, HDFS-12594.005.patch, > HDFS-12594.006.patch, HDFS-12594.007.patch, HDFS-12594.008.patch, > SnapshotDiff_Improvemnets .pdf > > > The snapshotDiff command fails if the snapshotDiff report size is larger than > the configuration value of ipc.maximum.response.length which is by default > 128 MB. > Worst case, with all Renames ops in sanpshots each with source and target > name equal to MAX_PATH_LEN which is 8k characters, this would result in at > 8192 renames. > > SnapshotDiff is currently used by distcp to optimize copy operations and in > case of the the diff report exceeding the limit , it fails with the below > exception: > Test set: > org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport > --- > Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 112.095 sec > <<< FAILURE! - in > org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport > testDiffReportWithMillionFiles(org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport) > Time elapsed: 111.906 sec <<< ERROR! > java.io.IOException: Failed on local exception: > org.apache.hadoop.ipc.RpcException: RPC response exceeds maximum data length; > Host Details : local host is: "hw15685.local/10.200.5.230"; destination host > is: "localhost":59808; > Attached is the proposal for the changes required. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-12594) SnapshotDiff - snapshotDiff fails if the snapshotDiff report exceeds the RPC response limit
[ https://issues.apache.org/jira/browse/HDFS-12594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16255468#comment-16255468 ] Shashikant Banerjee edited comment on HDFS-12594 at 11/16/17 3:14 PM: -- Thanks [~szetszwo] , for the review comments. patch v8 addresses the same. >>DFSUtilClient.bytes2byteArray and DFSUtil.bytes2byteArray are very similar >>but there is a small difference when len == 0: DFSUtilClient returns new byte[0][] and DFSUtil returns new byte[][]{null}. Is it a bug? <{}(byte[])->byte[][]{null}; Reverse Mapping: byte[][]{null}->byte[]{(byte) ("/") }->String("/") I have addressed the problems in conversion of byte[][] to byte[] . Please have a look. was (Author: shashikant): Thanks [~szetszwo] , for the review comments. patch v8 addresses the same. >>DFSUtilClient.bytes2byteArray and DFSUtil.bytes2byteArray are very similar >>but there is a small difference when len == 0: DFSUtilClient returns new byte[0][] and DFSUtil returns new byte[][]{null}. Is it a bug? < {}(byte[]) -> byte[][]{null}; Reverse Mapping: byte[][]{null} -> byte[]{(byte) ("/") } ->String("/") I have addressed the problems in conversion of byte[][] to byte[] . Please have a look. > SnapshotDiff - snapshotDiff fails if the snapshotDiff report exceeds the RPC > response limit > --- > > Key: HDFS-12594 > URL: https://issues.apache.org/jira/browse/HDFS-12594 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Reporter: Shashikant Banerjee >Assignee: Shashikant Banerjee > Attachments: HDFS-12594.001.patch, HDFS-12594.002.patch, > HDFS-12594.003.patch, HDFS-12594.004.patch, HDFS-12594.005.patch, > HDFS-12594.006.patch, HDFS-12594.007.patch, HDFS-12594.008.patch, > SnapshotDiff_Improvemnets .pdf > > > The snapshotDiff command fails if the snapshotDiff report size is larger than > the configuration value of ipc.maximum.response.length which is by default > 128 MB. > Worst case, with all Renames ops in sanpshots each with source and target > name equal to MAX_PATH_LEN which is 8k characters, this would result in at > 8192 renames. > > SnapshotDiff is currently used by distcp to optimize copy operations and in > case of the the diff report exceeding the limit , it fails with the below > exception: > Test set: > org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport > --- > Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 112.095 sec > <<< FAILURE! - in > org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport > testDiffReportWithMillionFiles(org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport) > Time elapsed: 111.906 sec <<< ERROR! > java.io.IOException: Failed on local exception: > org.apache.hadoop.ipc.RpcException: RPC response exceeds maximum data length; > Host Details : local host is: "hw15685.local/10.200.5.230"; destination host > is: "localhost":59808; > Attached is the proposal for the changes required. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org