[jira] [Created] (HADOOP-15121) Encounter NullPointerException when using DecayRpcScheduler
Tao Jie created HADOOP-15121:
Summary: Encounter NullPointerException when using DecayRpcScheduler
Key: HADOOP-15121
URL: https://issues.apache.org/jira/browse/HADOOP-15121
Project: Hadoop Common
Issue Type: Bug
Reporter: Tao Jie

I set ipc.8020.scheduler.impl to org.apache.hadoop.ipc.DecayRpcScheduler, but got an exception in the namenode:
{code}
2017-12-15 15:26:34,662 ERROR impl.MetricsSourceAdapter (MetricsSourceAdapter.java:getMetrics(202)) - Error getting metrics from source DecayRpcSchedulerMetrics2.ipc.8020
java.lang.NullPointerException
at org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.getMetrics(DecayRpcScheduler.java:781)
at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:199)
at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:182)
at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMBeanInfo(MetricsSourceAdapter.java:155)
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getNewMBeanClassName(DefaultMBeanServerInterceptor.java:333)
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:319)
at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
at org.apache.hadoop.metrics2.util.MBeans.register(MBeans.java:66)
at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.startMBeans(MetricsSourceAdapter.java:222)
at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.start(MetricsSourceAdapter.java:100)
at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.registerSource(MetricsSystemImpl.java:268)
at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:233)
at org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.registerMetrics2Source(DecayRpcScheduler.java:709)
at org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.<init>(DecayRpcScheduler.java:685)
at org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.getInstance(DecayRpcScheduler.java:693)
at org.apache.hadoop.ipc.DecayRpcScheduler.<init>(DecayRpcScheduler.java:236)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.ipc.CallQueueManager.createScheduler(CallQueueManager.java:102)
at org.apache.hadoop.ipc.CallQueueManager.<init>(CallQueueManager.java:76)
at org.apache.hadoop.ipc.Server.<init>(Server.java:2612)
at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:958)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:374)
at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:349)
at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:800)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:415)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:755)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:697)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:905)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:884)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1610)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1678)
{code}
It seems that {{metricsProxy}} in DecayRpcScheduler should initialize its {{delegate}} field in its initialization method.
-- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
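The suggested fix can be sketched as follows. This is a hypothetical illustration (the class, interface, and method names below are invented stand-ins, not the actual DecayRpcScheduler code): the proxy assigns its delegate before registering itself as a metrics source, so a registration-time getMetrics() callback never observes null.

```java
// Hypothetical sketch of the suggested fix (names are invented, not the
// real Hadoop classes): assign the delegate BEFORE registering the metrics
// source, since registration can synchronously call back into getMetrics().
public class MetricsProxySketch {
    interface MetricsSource { String getMetrics(); }

    private volatile MetricsSource delegate;

    private MetricsProxySketch(MetricsSource scheduler) {
        this.delegate = scheduler;   // initialize first...
        registerMetricsSource();     // ...then register, which may call getMetrics()
    }

    private void registerMetricsSource() {
        // A real metrics system may invoke getMetrics() during registration;
        // with delegate already set, this no longer throws NPE.
        getMetrics();
    }

    public String getMetrics() {
        MetricsSource d = delegate;
        return (d == null) ? "" : d.getMetrics(); // defensive null check
    }

    public static MetricsProxySketch getInstance(MetricsSource s) {
        return new MetricsProxySketch(s);
    }

    public static void main(String[] args) {
        System.out.println(getInstance(() -> "decayedCallVolume=1").getMetrics());
        // prints decayedCallVolume=1
    }
}
```

The defensive null check is a belt-and-braces addition; the ordering change alone addresses the stack trace above.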
[jira] [Commented] (HADOOP-15080) Aliyun OSS: update oss sdk from 2.8.1 to 2.8.3 to remove its dependency on Cat-x "json-lib"
[ https://issues.apache.org/jira/browse/HADOOP-15080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16292132#comment-16292132 ] SammiChen commented on HADOOP-15080:
I see. Thanks for the explanation, [~andrew.wang].
> Aliyun OSS: update oss sdk from 2.8.1 to 2.8.3 to remove its dependency on Cat-x "json-lib"
> ---
>
> Key: HADOOP-15080
> URL: https://issues.apache.org/jira/browse/HADOOP-15080
> Project: Hadoop Common
> Issue Type: Bug
> Components: fs/oss
> Affects Versions: 3.0.0-beta1
> Reporter: Chris Douglas
> Assignee: SammiChen
> Priority: Blocker
> Fix For: 3.0.0, 3.1.0, 2.10.0, 2.9.1
>
> Attachments: HADOOP-15080-branch-3.0.0.001.patch, HADOOP-15080-branch-3.0.0.002.patch
>
> Cat-X dependency on org.json via derived json-lib. OSS SDK has a dependency on json-lib. In LEGAL-245, the org.json library (from which json-lib may be derived) is released under a [category-x|https://www.apache.org/legal/resolved.html#json] license.
[jira] [Updated] (HADOOP-15111) AliyunOSS: backport HADOOP-14993 to branch-2
[ https://issues.apache.org/jira/browse/HADOOP-15111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] SammiChen updated HADOOP-15111:
---
Resolution: Fixed
Assignee: Genmao Yu
Status: Resolved (was: Patch Available)
> AliyunOSS: backport HADOOP-14993 to branch-2
>
> Key: HADOOP-15111
> URL: https://issues.apache.org/jira/browse/HADOOP-15111
> Project: Hadoop Common
> Issue Type: Improvement
> Components: fs/oss
> Reporter: Genmao Yu
> Assignee: Genmao Yu
> Fix For: 2.10.0, 2.9.1
>
> Attachments: HADOOP-15111-branch-2.001.patch
>
> Do a bulk listing of all entries under a path in one single operation; there is no need to recursively walk the directory tree.
> Updates:
> - override listFiles and listLocatedStatus by using bulk listing
> - some minor updates in hadoop-aliyun index.md
[jira] [Updated] (HADOOP-15111) AliyunOSS: backport HADOOP-14993 to branch-2
[ https://issues.apache.org/jira/browse/HADOOP-15111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] SammiChen updated HADOOP-15111:
---
Fix Version/s: 2.9.1
               2.10.0
> AliyunOSS: backport HADOOP-14993 to branch-2
>
> Key: HADOOP-15111
> URL: https://issues.apache.org/jira/browse/HADOOP-15111
> Project: Hadoop Common
> Issue Type: Improvement
> Components: fs/oss
> Reporter: Genmao Yu
> Fix For: 2.10.0, 2.9.1
>
> Attachments: HADOOP-15111-branch-2.001.patch
>
> Do a bulk listing of all entries under a path in one single operation; there is no need to recursively walk the directory tree.
> Updates:
> - override listFiles and listLocatedStatus by using bulk listing
> - some minor updates in hadoop-aliyun index.md
[jira] [Commented] (HADOOP-15111) AliyunOSS: backport HADOOP-14993 to branch-2
[ https://issues.apache.org/jira/browse/HADOOP-15111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16292120#comment-16292120 ] SammiChen commented on HADOOP-15111:
Thanks [~uncleGen] for the work. My +1. Committed to branch-2 and branch-2.9.
> AliyunOSS: backport HADOOP-14993 to branch-2
>
> Key: HADOOP-15111
> URL: https://issues.apache.org/jira/browse/HADOOP-15111
> Project: Hadoop Common
> Issue Type: Improvement
> Components: fs/oss
> Reporter: Genmao Yu
>
> Attachments: HADOOP-15111-branch-2.001.patch
>
> Do a bulk listing of all entries under a path in one single operation; there is no need to recursively walk the directory tree.
> Updates:
> - override listFiles and listLocatedStatus by using bulk listing
> - some minor updates in hadoop-aliyun index.md
[jira] [Commented] (HADOOP-15104) AliyunOSS: change the default value of max error retry
[ https://issues.apache.org/jira/browse/HADOOP-15104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16292072#comment-16292072 ] SammiChen commented on HADOOP-15104:
Committed to branch-2 and branch-2.9. Thanks Jinhu for the work.
> AliyunOSS: change the default value of max error retry
> ---
>
> Key: HADOOP-15104
> URL: https://issues.apache.org/jira/browse/HADOOP-15104
> Project: Hadoop Common
> Issue Type: Improvement
> Components: fs/oss
> Affects Versions: 3.0.0-beta1
> Reporter: wujinhu
> Assignee: wujinhu
> Fix For: 3.0.0, 3.1.0, 2.10.0, 2.9.1
>
> Attachments: HADOOP-15104.001.patch
>
> Currently, the default number of times we retry errors is 20; however, the OSS SDK retry delay when an error occurs is
> {code:java}
> long delay = (long)Math.pow(2, retries) * 0.3
> {code}
> So if we retry 20 times, the total sleep time will be about 3.64 days, which is unacceptable. We should change the default behavior.
[jira] [Updated] (HADOOP-15104) AliyunOSS: change the default value of max error retry
[ https://issues.apache.org/jira/browse/HADOOP-15104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] SammiChen updated HADOOP-15104:
---
Fix Version/s: 2.9.1
               2.10.0
> AliyunOSS: change the default value of max error retry
> ---
>
> Key: HADOOP-15104
> URL: https://issues.apache.org/jira/browse/HADOOP-15104
> Project: Hadoop Common
> Issue Type: Improvement
> Components: fs/oss
> Affects Versions: 3.0.0-beta1
> Reporter: wujinhu
> Assignee: wujinhu
> Fix For: 3.0.0, 3.1.0, 2.10.0, 2.9.1
>
> Attachments: HADOOP-15104.001.patch
>
> Currently, the default number of times we retry errors is 20; however, the OSS SDK retry delay when an error occurs is
> {code:java}
> long delay = (long)Math.pow(2, retries) * 0.3
> {code}
> So if we retry 20 times, the total sleep time will be about 3.64 days, which is unacceptable. We should change the default behavior.
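For concreteness, the 3.64-day figure quoted in the issue description can be reproduced by summing the SDK's per-retry delay (2^retries * 0.3 seconds) over 20 attempts:

```java
// Sums the per-attempt delay 2^r * 0.3 s for r = 0..19, reproducing the
// ~3.64-day total sleep time mentioned in the issue description.
public class RetryDelaySum {
    static double totalDelaySeconds(int retries) {
        double total = 0.0;
        for (int r = 0; r < retries; r++) {
            total += Math.pow(2, r) * 0.3; // SDK backoff formula per attempt
        }
        return total; // equals 0.3 * (2^retries - 1)
    }

    public static void main(String[] args) {
        double seconds = totalDelaySeconds(20);
        System.out.printf("%.1f s = %.2f days%n", seconds, seconds / 86400.0);
        // 314572.5 s = 3.64 days
    }
}
```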
[jira] [Commented] (HADOOP-10054) ViewFsFileStatus.toString() is broken
[ https://issues.apache.org/jira/browse/HADOOP-10054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16292024#comment-16292024 ] Hanisha Koneru commented on HADOOP-10054:
Thank you [~xyao] for committing the patch.
> ViewFsFileStatus.toString() is broken
> ---
>
> Key: HADOOP-10054
> URL: https://issues.apache.org/jira/browse/HADOOP-10054
> Project: Hadoop Common
> Issue Type: Improvement
> Components: fs
> Affects Versions: 2.0.5-alpha
> Reporter: Paul Han
> Assignee: Hanisha Koneru
> Priority: Minor
> Fix For: 3.0.1
>
> Attachments: HADOOP-10054.001.patch, HADOOP-10054.002.patch
>
> ViewFsFileStatus.toString() is broken. The following code snippet:
> {code}
> FileStatus stat = somefunc(); // somefunc() returns an instance of ViewFsFileStatus
> System.out.println("path:" + stat.getPath());
> System.out.println(stat.toString());
> {code}
> produces the output:
> {code}
> path:viewfs://x.com/user/X/tmp-48
> ViewFsFileStatus{path=null; isDirectory=false; length=0; replication=0; blocksize=0; modification_time=0; access_time=0; owner=; group=; permission=rw-rw-rw-; isSymlink=false}
> {code}
> Note that "path=null" is not correct.
[jira] [Commented] (HADOOP-14993) AliyunOSS: Override listFiles and listLocatedStatus
[ https://issues.apache.org/jira/browse/HADOOP-14993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16292001#comment-16292001 ] Genmao Yu commented on HADOOP-14993:
@SammiChen attach a new patch at HDOOP-15111
> AliyunOSS: Override listFiles and listLocatedStatus
>
> Key: HADOOP-14993
> URL: https://issues.apache.org/jira/browse/HADOOP-14993
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/oss
> Affects Versions: 3.0.0-beta1
> Reporter: Genmao Yu
> Assignee: Genmao Yu
> Fix For: 3.0.0, 3.1.0, 3.0.1
>
> Attachments: HADOOP-14993.001.patch, HADOOP-14993.002.patch, HADOOP-14993.003.patch
>
> Do a bulk listing of all entries under a path in one single operation; there is no need to recursively walk the directory tree.
> Updates:
> - override listFiles and listLocatedStatus by using bulk listing
> - some minor updates in hadoop-aliyun index.md
[jira] [Comment Edited] (HADOOP-14993) AliyunOSS: Override listFiles and listLocatedStatus
[ https://issues.apache.org/jira/browse/HADOOP-14993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16292001#comment-16292001 ] Genmao Yu edited comment on HADOOP-14993 at 12/15/17 3:55 AM:
---
[~Sammi] attach a new patch at HADOOP-15111
was (Author: unclegen): @SammiChen attach a new patch at HDOOP-15111
> AliyunOSS: Override listFiles and listLocatedStatus
>
> Key: HADOOP-14993
> URL: https://issues.apache.org/jira/browse/HADOOP-14993
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/oss
> Affects Versions: 3.0.0-beta1
> Reporter: Genmao Yu
> Assignee: Genmao Yu
> Fix For: 3.0.0, 3.1.0, 3.0.1
>
> Attachments: HADOOP-14993.001.patch, HADOOP-14993.002.patch, HADOOP-14993.003.patch
>
> Do a bulk listing of all entries under a path in one single operation; there is no need to recursively walk the directory tree.
> Updates:
> - override listFiles and listLocatedStatus by using bulk listing
> - some minor updates in hadoop-aliyun index.md
[jira] [Commented] (HADOOP-15114) Add closeStreams(...) to IOUtils
[ https://issues.apache.org/jira/browse/HADOOP-15114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16291956#comment-16291956 ] genericqa commented on HADOOP-15114:
| (x) *{color:red}-1 overall{color}* |
\\ \\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 28m 16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 39s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 56s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 30s{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 33s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}107m 51s{color} | {color:black} {color} |
\\ \\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestRaceWhenRelogin |
| | hadoop.ha.TestZKFailoverController |
| | hadoop.fs.viewfs.TestViewFileSystemLocalFileSystem |
| | hadoop.fs.viewfs.TestViewFileSystemWithAuthorityLocalFileSystem |
\\ \\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HADOOP-15114 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12902187/HADOOP-15114.002.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux d47190a74f44 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 95d4ec7 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/13836/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/13836/testReport/ |
| asflicense | https://builds.apache.org/job/PreCommit-HADOOP-Build/13836/artifact/out/patch-asflicense-problems.txt |
| Max.
[jira] [Commented] (HADOOP-12502) SetReplication OutOfMemoryError
[ https://issues.apache.org/jira/browse/HADOOP-12502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16291918#comment-16291918 ] Aaron Fabbri commented on HADOOP-12502:
---
Hi, thanks for the work on this [~vinayrpet] and [~jojochuang]. Some questions on the latest patch.

Do we know where most of the memory was going? Is it the references to all the listStatus() arrays held in the full recursion call tree in fs/shell/Command.java? (Each recursive call passes a reference to that level's listStatus() array, meaning the whole tree will be held in heap, right?)

How does this patch fix the OOM issue? Is it because we're now holding RemoteIterators for the whole directory tree in memory, instead of holding the actual listStatus arrays?

{noformat}
+  public RemoteIterator<FileStatus> listStatusIterator(Path p)
+      throws IOException {
+    // Not-using fs' listStatusIterator since iterator also expect to return
+    // sorted elements.
+    return new RemoteIterator<FileStatus>() {
+      private final FileStatus[] stats = listStatus(p);
{noformat}

Why are we forcing ChecksumFilesystem to not use listStatusIterator(), and sorting the results here? This could increase memory usage, no? I don't think a sorted iterator is required by the FS contract.

The Ls.java changes seem tricky. I wonder if there is a simpler way of doing this (idea: Command exposes an overridable {{boolean isSorted()}} predicate that Ls.java can override if it needs sorting, and leave the traversal logic in Command instead of mucking with it in Ls?)

{noformat}
-    // TODO: this really should be iterative
{noformat}

Is this comment still true? I'm guessing the intent was "iterative" as in "not recursive", instead of "iterative" as in "using an iterator".

{noformat}
+  protected void recursePath(PathData item) throws IOException {
+    if (!isRecursive() || isOrderTime() || isOrderSize() || isOrderReverse()) {
+      // use the non-iterative method for listing because explicit sorting is
{noformat}

You mean "non-recursive", right? Or maybe "non-iterator".

{noformat}
+      // required based on time/size/reverse or Total number of entries
+      // required to print summary first when non-recursive.
+      processPaths(item, item.getDirectoryContents());
{noformat}

What about the depth++ / depth-- accounting in Command.recursePaths() that you skip here? Is the logic that Ls does not use getDepth()? Seems brittle.

{noformat}
+    } else {
+      super.recursePath(item);
+    }
+  }
+
{noformat}

Why does PathData.getDirectoryContents() sort its listing?

{noformat}
  @Override
  protected void processPaths(PathData parent,
      RemoteIterator<PathData> itemsIterator) throws IOException {
    if (pathOnly) {
      // If there is a need of printing only paths, then iterator can be used
      // directly.
      super.processPaths(parent, itemsIterator);
      return;
    }
    /*
     * LS output should be formatted properly. Grouping 100 items and formatting
     * the output to reduce the creation of huge sized arrays. This method will
     * be called only when recursive is set.
     */
    List<PathData> items = new ArrayList<>();
    while (itemsIterator.hasNext()) {
      while (items.size() < 100 && itemsIterator.hasNext()) {
        items.add(itemsIterator.next());
      }
      processPaths(parent, items.toArray(new PathData[items.size()]));
      items.clear();
{noformat}

I guess this is where much of the memory savings comes from, and this chunking into 100 works without changing the depth-first search ordering. What about the sorting in the existing {{Ls#processPaths()}}? That changes, because we now only sort the batches of 100.

I like the idea of chunking the depth-first search (DFS) into blocks of 100 and releasing references on the way up. Wouldn't we want to do this in Command instead of Ls? Two reasons: (1) other commands benefit, and (2) it is less brittle in terms of how the recursion logic is wired up between Command and Ls.
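The chunking under discussion can be sketched generically. This is an illustration of the technique only, not the Ls.java patch (the names here are invented, and a plain java.util.Iterator stands in for Hadoop's RemoteIterator): drain the iterator in batches of at most 100 and drop each batch's references before fetching the next, so the full listing is never held in memory at once.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.function.Consumer;

// Generic sketch of the batching discussed above: process an iterator in
// chunks of at most `chunkSize` so the whole listing is never materialized.
public class ChunkedProcessing {
    static <T> void processInChunks(Iterator<T> it, int chunkSize,
                                    Consumer<List<T>> processor) {
        List<T> batch = new ArrayList<>(chunkSize);
        while (it.hasNext()) {
            batch.add(it.next());
            if (batch.size() == chunkSize) {
                processor.accept(batch);
                batch = new ArrayList<>(chunkSize); // release processed refs
            }
        }
        if (!batch.isEmpty()) {
            processor.accept(batch); // final partial chunk
        }
    }

    public static void main(String[] args) {
        List<Integer> nums = new ArrayList<>();
        for (int i = 0; i < 250; i++) nums.add(i);
        final int[] batches = {0};
        processInChunks(nums.iterator(), 100, b -> batches[0]++);
        System.out.println(batches[0]); // prints 3 (100 + 100 + 50)
    }
}
```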
> SetReplication OutOfMemoryError
> ---
>
> Key: HADOOP-12502
> URL: https://issues.apache.org/jira/browse/HADOOP-12502
> Project: Hadoop Common
> Issue Type: Bug
> Affects Versions: 2.3.0
> Reporter: Philipp Schuegerl
> Assignee: Vinayakumar B
>
> Attachments: HADOOP-12502-01.patch, HADOOP-12502-02.patch, HADOOP-12502-03.patch, HADOOP-12502-04.patch, HADOOP-12502-05.patch, HADOOP-12502-06.patch, HADOOP-12502-07.patch
>
> Setting the replication of an HDFS folder recursively can run out of memory. E.g. with a large /var/log directory:
> hdfs dfs -setrep -R -w 1 /var/log
> Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded
> at java.util.Arrays.copyOfRange(Arrays.java:2694)
> at java.lang.String.<init>(String.java:203)
> at java.lang.String.substring(String.java:1913)
> at java.net.URI$Parser.substring(URI.java:2850)
>
[jira] [Updated] (HADOOP-15120) Reduce the overhead of AzureNativeFileSystemStore's rename
[ https://issues.apache.org/jira/browse/HADOOP-15120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yin Huai updated HADOOP-15120:
---
Description: AzureNativeFileSystemStore's rename calls [waitForCopyToComplete|https://github.com/apache/hadoop/blob/d69b7358b65128197b4c0fe5ef3c02f3d59864b3/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/AzureNativeFileSystemStore.java#L2742]. If the copy executed inside waitForCopyToComplete cannot finish quickly, waitForCopyToComplete will wait at least 1 second before checking the status again. This behavior can introduce significant overhead when the actual data operation finishes quickly.
> Reduce the overhead of AzureNativeFileSystemStore's rename
> ---
>
> Key: HADOOP-15120
> URL: https://issues.apache.org/jira/browse/HADOOP-15120
> Project: Hadoop Common
> Issue Type: Improvement
> Components: fs/azure
> Reporter: Yin Huai
>
> AzureNativeFileSystemStore's rename calls [waitForCopyToComplete|https://github.com/apache/hadoop/blob/d69b7358b65128197b4c0fe5ef3c02f3d59864b3/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/AzureNativeFileSystemStore.java#L2742]. If the copy executed inside waitForCopyToComplete cannot finish quickly, waitForCopyToComplete will wait at least 1 second before checking the status again. This behavior can introduce significant overhead when the actual data operation finishes quickly.
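One common way to cut that fixed 1-second poll penalty, sketched here with hypothetical names (checkDone stands in for the real copy-status probe; this is not the actual AzureNativeFileSystemStore code), is to start polling at a short interval and back off toward the current 1-second cap, so fast copies are detected almost immediately:

```java
import java.util.function.BooleanSupplier;

// Hypothetical sketch: poll for copy completion starting with a short delay
// and doubling it up to a cap, so fast copies return quickly while slow
// copies converge on the existing 1-second polling interval.
public class AdaptivePoll {
    static void waitForCopyToComplete(BooleanSupplier checkDone)
            throws InterruptedException {
        long delayMs = 50;             // short initial delay for fast copies
        final long maxDelayMs = 1000;  // cap at the current 1s behavior
        while (!checkDone.getAsBoolean()) {
            Thread.sleep(delayMs);
            delayMs = Math.min(delayMs * 2, maxDelayMs);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        final int[] polls = {0};
        waitForCopyToComplete(() -> ++polls[0] >= 3); // "done" on the 3rd probe
        System.out.println("polls: " + polls[0]); // prints polls: 3
    }
}
```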
[jira] [Commented] (HADOOP-15027) AliyunOSS: Improvements for Hadoop read from AliyunOSS
[ https://issues.apache.org/jira/browse/HADOOP-15027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16291908#comment-16291908 ] wujinhu commented on HADOOP-15027:
---
[~ste...@apache.org] [~drankye] [~uncleGen] [~Sammi] Please take a look; this patch is based on HADOOP-15039, which refactors *_SemaphoredDelegatingExecutor_*.
> AliyunOSS: Improvements for Hadoop read from AliyunOSS
> ---
>
> Key: HADOOP-15027
> URL: https://issues.apache.org/jira/browse/HADOOP-15027
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/oss
> Affects Versions: 3.0.0
> Reporter: wujinhu
> Assignee: wujinhu
>
> Attachments: HADOOP-15027.001.patch, HADOOP-15027.002.patch, HADOOP-15027.003.patch, HADOOP-15027.004.patch
>
> Currently, read performance is poor when Hadoop reads from AliyunOSS; it takes about 1 minute to read 1 GB from OSS.
> Class AliyunOSSInputStream uses a single thread to read data from AliyunOSS, so we can refactor this by using multi-threaded pre-read to improve performance.
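The multi-threaded pre-read idea can be sketched as fetching several fixed-size ranges concurrently and consuming them in order. Everything below is illustrative (fetchRange, the pool size, and the range length are invented for the sketch), not the HADOOP-15027 patch itself:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Illustrative multi-threaded pre-read: submit range fetches to a thread
// pool, then consume the buffers in order. fetchRange stands in for the
// real per-range OSS GET request.
public class PreReadSketch {
    static byte[] fetchRange(long offset, int len) {
        byte[] b = new byte[len]; // pretend this came from OSS
        for (int i = 0; i < len; i++) b[i] = (byte) ((offset + i) & 0x7f);
        return b;
    }

    static List<byte[]> preRead(long start, int rangeLen, int ranges)
            throws InterruptedException, ExecutionException {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        try {
            List<Future<byte[]>> futures = new ArrayList<>();
            for (int i = 0; i < ranges; i++) {
                final long off = start + (long) i * rangeLen;
                futures.add(pool.submit(() -> fetchRange(off, rangeLen)));
            }
            List<byte[]> out = new ArrayList<>();
            for (Future<byte[]> f : futures) out.add(f.get()); // consume in order
            return out;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        List<byte[]> bufs = preRead(0, 8, 4);
        System.out.println(bufs.size() + " ranges, first byte of range 1 = " + bufs.get(1)[0]);
        // prints 4 ranges, first byte of range 1 = 8
    }
}
```

The ordered consumption of futures preserves sequential-read semantics while the fetches themselves overlap.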
[jira] [Created] (HADOOP-15120) AzureNativeFileSystemStore
Yin Huai created HADOOP-15120:
Summary: AzureNativeFileSystemStore
Key: HADOOP-15120
URL: https://issues.apache.org/jira/browse/HADOOP-15120
Project: Hadoop Common
Issue Type: Bug
Reporter: Yin Huai
[jira] [Updated] (HADOOP-15120) Reduce the overhead of AzureNativeFileSystemStore's rename
[ https://issues.apache.org/jira/browse/HADOOP-15120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yin Huai updated HADOOP-15120:
---
Issue Type: Improvement (was: Bug)
> Reduce the overhead of AzureNativeFileSystemStore's rename
> ---
>
> Key: HADOOP-15120
> URL: https://issues.apache.org/jira/browse/HADOOP-15120
> Project: Hadoop Common
> Issue Type: Improvement
> Components: fs/azure
> Reporter: Yin Huai
>
[jira] [Updated] (HADOOP-15120) Reduce the overhead of AzureNativeFileSystemStore's rename
[ https://issues.apache.org/jira/browse/HADOOP-15120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yin Huai updated HADOOP-15120:
---
Component/s: fs/azure
> Reduce the overhead of AzureNativeFileSystemStore's rename
> ---
>
> Key: HADOOP-15120
> URL: https://issues.apache.org/jira/browse/HADOOP-15120
> Project: Hadoop Common
> Issue Type: Improvement
> Components: fs/azure
> Reporter: Yin Huai
>
[jira] [Updated] (HADOOP-15120) Reduce the overhead of AzureNativeFileSystemStore's rename
[ https://issues.apache.org/jira/browse/HADOOP-15120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yin Huai updated HADOOP-15120:
---
Summary: Reduce the overhead of AzureNativeFileSystemStore's rename (was: AzureNativeFileSystemStore)
> Reduce the overhead of AzureNativeFileSystemStore's rename
> ---
>
> Key: HADOOP-15120
> URL: https://issues.apache.org/jira/browse/HADOOP-15120
> Project: Hadoop Common
> Issue Type: Bug
> Components: fs/azure
> Reporter: Yin Huai
>
[jira] [Created] (HADOOP-15119) AzureNativeFileSystemStore's rename swallows InterruptedExceptions
Yin Huai created HADOOP-15119:
Summary: AzureNativeFileSystemStore's rename swallows InterruptedExceptions
Key: HADOOP-15119
URL: https://issues.apache.org/jira/browse/HADOOP-15119
Project: Hadoop Common
Issue Type: Bug
Components: fs/azure
Reporter: Yin Huai

AzureNativeFileSystemStore's rename calls [waitForCopyToComplete|https://github.com/apache/hadoop/blob/d69b7358b65128197b4c0fe5ef3c02f3d59864b3/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/AzureNativeFileSystemStore.java#L2742], which swallows InterruptedExceptions and prevents the current thread from being interrupted. Once we catch the exception, it would be good to call Thread.currentThread().interrupt(), so that if this thread blocks at a later time, an InterruptedException will be properly thrown.
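The handling suggested above is the standard Java idiom for a caught-but-not-rethrown InterruptedException: restore the thread's interrupt status so later blocking calls still observe it. A minimal sketch (waitForCopy is a hypothetical stand-in for the polling loop, not the Azure code):

```java
// Standard pattern: re-assert the interrupt flag when an InterruptedException
// is caught but not rethrown, so the interruption is not silently swallowed.
// waitForCopy is a hypothetical stand-in for the real polling loop.
public class InterruptRestore {
    static boolean waitForCopy(long millis) {
        try {
            Thread.sleep(millis); // stand-in for the 1s polling sleep
            return true;
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // preserve interrupt status
            return false;
        }
    }

    public static void main(String[] args) {
        Thread.currentThread().interrupt(); // simulate an interrupt
        boolean done = waitForCopy(1000);   // sleep throws immediately
        // The flag survives the catch, so callers can still observe it.
        System.out.println(done + " " + Thread.interrupted()); // prints false true
    }
}
```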
[jira] [Commented] (HADOOP-15118) Change default classpath to be only shaded jars
[ https://issues.apache.org/jira/browse/HADOOP-15118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16291869#comment-16291869 ] Allen Wittenauer commented on HADOOP-15118:
---
Ideally, yes. Also, just to clarify:
bq. give complete hdfs classpath?
This isn't a thing. There is no specific hdfs classpath. hdfs classpath = yarn classpath = hadoop classpath
If we're going to break the world, that should get undone as well, i.e., hdfs classpath really does return just what HDFS needs. In order to do that, we'd need to take the logic we use to build the tools dependencies and apply it everywhere. For example, 'hdfs namenode' would build the specific classpath for the NN. 'hdfs classpath' would build the specific classpath just for hdfs clients. In the end, we'd likely break up the share directory even further. That's not a bad thing.
Things get tricky with tools like AWS, but we could treat it like we do now: everything gets them.
All that said, this is a project for 4.x, and 3.x just got a release today. So we're looking at least 4 years out.
> Change default classpath to be only shaded jars
> ---
>
> Key: HADOOP-15118
> URL: https://issues.apache.org/jira/browse/HADOOP-15118
> Project: Hadoop Common
> Issue Type: New Feature
> Reporter: Bharat Viswanadham
> Assignee: Bharat Viswanadham
>
> It would be desirable to change the default classpath to be just the shaded jars.
[jira] [Commented] (HADOOP-15118) Change default classpath to be only shaded jars
[ https://issues.apache.org/jira/browse/HADOOP-15118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16291863#comment-16291863 ] Bharat Viswanadham commented on HADOOP-15118: - Thank you [~aw] for your response. So now, when users run commands, we give a classpath of only the shaded jars, and have another option like --full (for example) to give the complete hdfs classpath? > Change default classpath to be only shaded jars > --- > > Key: HADOOP-15118 > URL: https://issues.apache.org/jira/browse/HADOOP-15118 > Project: Hadoop Common > Issue Type: New Feature >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham > > It would be desirable to change the default classpath to be just the shaded > jars.
[jira] [Updated] (HADOOP-15114) Add closeStreams(...) to IOUtils
[ https://issues.apache.org/jira/browse/HADOOP-15114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HADOOP-15114: Attachment: HADOOP-15114.002.patch [~ste...@apache.org], thanks for review. Adding test case in patch v2. > Add closeStreams(...) to IOUtils > > > Key: HADOOP-15114 > URL: https://issues.apache.org/jira/browse/HADOOP-15114 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Ajay Kumar >Assignee: Ajay Kumar > Attachments: HADOOP-15114.001.patch, HADOOP-15114.002.patch > > > Add closeStreams(...) in IOUtils. Originally suggested by [Jason > Lowe|https://issues.apache.org/jira/browse/HDFS-12881?focusedCommentId=16288320&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16288320].
[jira] [Comment Edited] (HADOOP-15114) Add closeStreams(...) to IOUtils
[ https://issues.apache.org/jira/browse/HADOOP-15114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16291859#comment-16291859 ] Ajay Kumar edited comment on HADOOP-15114 at 12/15/17 1:02 AM: --- [~ste...@apache.org], thanks for review. Addressed checkstyle issues and added test case in patch v2. was (Author: ajayydv): [~ste...@apache.org], thanks for review. Adding test case in patch v2. > Add closeStreams(...) to IOUtils > > > Key: HADOOP-15114 > URL: https://issues.apache.org/jira/browse/HADOOP-15114 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Ajay Kumar >Assignee: Ajay Kumar > Attachments: HADOOP-15114.001.patch, HADOOP-15114.002.patch > > > Add closeStreams(...) in IOUtils. Originally suggested by [Jason > Lowe|https://issues.apache.org/jira/browse/HDFS-12881?focusedCommentId=16288320&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16288320].
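For context, the helper under discussion is small. A sketch of what a varargs closeStreams(...) might look like (the name follows the JIRA; this standalone class is illustrative, not the committed patch to org.apache.hadoop.io.IOUtils):

```java
import java.io.Closeable;
import java.io.IOException;

// Best-effort close of several streams: one failing close() must not
// prevent the remaining streams from being closed.
public class CloseStreamsSketch {
    public static void closeStreams(Closeable... streams) {
        if (streams == null) {
            return;
        }
        for (Closeable stream : streams) {
            if (stream == null) {
                continue; // tolerate nulls, as IOUtils.closeStream does
            }
            try {
                stream.close();
            } catch (IOException ignored) {
                // best-effort cleanup: swallow and move on to the next stream
            }
        }
    }

    public static void main(String[] args) {
        final boolean[] closed = {false};
        Closeable failing = () -> { throw new IOException("boom"); };
        Closeable ok = () -> closed[0] = true;
        closeStreams(failing, null, ok); // the failure does not stop the loop
        System.out.println(closed[0]);   // prints true
    }
}
```

The swallow-on-close behavior is appropriate only for cleanup paths; errors on close of an output stream being flushed should still propagate.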
[jira] [Commented] (HADOOP-15118) Change default classpath to be only shaded jars
[ https://issues.apache.org/jira/browse/HADOOP-15118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16291858#comment-16291858 ] Allen Wittenauer commented on HADOOP-15118: --- Linking HADOOP-13952 to show which tools we know to have external dependencies. Any change here will impact that JIRA issue as well. > Change default classpath to be only shaded jars > --- > > Key: HADOOP-15118 > URL: https://issues.apache.org/jira/browse/HADOOP-15118 > Project: Hadoop Common > Issue Type: New Feature >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham > > It would be desirable to change the default classpath to be just the shaded > jars.
[jira] [Updated] (HADOOP-15118) Change default classpath to be only shaded jars
[ https://issues.apache.org/jira/browse/HADOOP-15118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer updated HADOOP-15118: -- Hadoop Flags: Incompatible change > Change default classpath to be only shaded jars > --- > > Key: HADOOP-15118 > URL: https://issues.apache.org/jira/browse/HADOOP-15118 > Project: Hadoop Common > Issue Type: New Feature >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham > > It would be desirable to change the default classpath to be just the shaded > jars.
[jira] [Updated] (HADOOP-15118) Change default classpath to be only shaded jars
[ https://issues.apache.org/jira/browse/HADOOP-15118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer updated HADOOP-15118: -- Description: It would be desirable to change the default classpath to be just the shaded jars. (was: [root@n001 hadoop]# bin/hdfs dfs -rm / Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/htrace/core/Tracer$Builder at org.apache.hadoop.fs.FsShell.run(FsShell.java:303) at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76) at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90) at org.apache.hadoop.fs.FsShell.main(FsShell.java:389) Caused by: java.lang.ClassNotFoundException: org.apache.htrace.core.Tracer$Builder at java.net.URLClassLoader.findClass(URLClassLoader.java:381) at java.lang.ClassLoader.loadClass(ClassLoader.java:424) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335) at java.lang.ClassLoader.loadClass(ClassLoader.java:357) ... 4 more cc [~busbey]) > Change default classpath to be only shaded jars > --- > > Key: HADOOP-15118 > URL: https://issues.apache.org/jira/browse/HADOOP-15118 > Project: Hadoop Common > Issue Type: New Feature >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham > > It would be desirable to change the default classpath to be just the shaded > jars.
[jira] [Updated] (HADOOP-15118) Change default classpath to be only shaded jars
[ https://issues.apache.org/jira/browse/HADOOP-15118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer updated HADOOP-15118: -- Summary: Change default classpath to be only shaded jars (was: HDFS commands throws error, when only shaded clients in classpath) > Change default classpath to be only shaded jars > --- > > Key: HADOOP-15118 > URL: https://issues.apache.org/jira/browse/HADOOP-15118 > Project: Hadoop Common > Issue Type: New Feature >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham > > [root@n001 hadoop]# bin/hdfs dfs -rm / > Exception in thread "main" java.lang.NoClassDefFoundError: > org/apache/htrace/core/Tracer$Builder > at org.apache.hadoop.fs.FsShell.run(FsShell.java:303) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90) > at org.apache.hadoop.fs.FsShell.main(FsShell.java:389) > Caused by: java.lang.ClassNotFoundException: > org.apache.htrace.core.Tracer$Builder > at java.net.URLClassLoader.findClass(URLClassLoader.java:381) > at java.lang.ClassLoader.loadClass(ClassLoader.java:424) > at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335) > at java.lang.ClassLoader.loadClass(ClassLoader.java:357) > ... 4 more > cc [~busbey]
[jira] [Moved] (HADOOP-15118) HDFS commands throws error, when only shaded clients in classpath
[ https://issues.apache.org/jira/browse/HADOOP-15118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer moved HDFS-12916 to HADOOP-15118: -- Key: HADOOP-15118 (was: HDFS-12916) Project: Hadoop Common (was: Hadoop HDFS) > HDFS commands throws error, when only shaded clients in classpath > - > > Key: HADOOP-15118 > URL: https://issues.apache.org/jira/browse/HADOOP-15118 > Project: Hadoop Common > Issue Type: New Feature >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham > > [root@n001 hadoop]# bin/hdfs dfs -rm / > Exception in thread "main" java.lang.NoClassDefFoundError: > org/apache/htrace/core/Tracer$Builder > at org.apache.hadoop.fs.FsShell.run(FsShell.java:303) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90) > at org.apache.hadoop.fs.FsShell.main(FsShell.java:389) > Caused by: java.lang.ClassNotFoundException: > org.apache.htrace.core.Tracer$Builder > at java.net.URLClassLoader.findClass(URLClassLoader.java:381) > at java.lang.ClassLoader.loadClass(ClassLoader.java:424) > at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335) > at java.lang.ClassLoader.loadClass(ClassLoader.java:357) > ... 4 more > cc [~busbey]
[jira] [Comment Edited] (HADOOP-15114) Add closeStreams(...) to IOUtils
[ https://issues.apache.org/jira/browse/HADOOP-15114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16291347#comment-16291347 ] Ajay Kumar edited comment on HADOOP-15114 at 12/15/17 12:27 AM: [~jlowe], could you please have a look at the patch. Please let me know if this warrants a test case. (The other methods {{IOUtils#cleanupWithLogger, IOUtils#closeStream}} don't have one.) was (Author: ajayydv): [~jlowe], Could you please have look at patch. Please let me know if this warrants a test case. (Other methods {IOUtils#cleanupWithLogger, IOUtils#closeStream} doesn't have one). > Add closeStreams(...) to IOUtils > > > Key: HADOOP-15114 > URL: https://issues.apache.org/jira/browse/HADOOP-15114 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Ajay Kumar >Assignee: Ajay Kumar > Attachments: HADOOP-15114.001.patch > > > Add closeStreams(...) in IOUtils. Originally suggested by [Jason > Lowe|https://issues.apache.org/jira/browse/HDFS-12881?focusedCommentId=16288320&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16288320].
[jira] [Commented] (HADOOP-15106) FileSystem::open(PathHandle) should throw a specific exception on validation failure
[ https://issues.apache.org/jira/browse/HADOOP-15106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16291775#comment-16291775 ] Íñigo Goiri commented on HADOOP-15106: -- Given HADOOP-15117, I think this is enough. +1 > FileSystem::open(PathHandle) should throw a specific exception on validation > failure > > > Key: HADOOP-15106 > URL: https://issues.apache.org/jira/browse/HADOOP-15106 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Chris Douglas >Priority: Minor > Attachments: HADOOP-15106.00.patch, HADOOP-15106.01.patch, > HADOOP-15106.02.patch, HADOOP-15106.03.patch, HADOOP-15106.04.patch > > > Callers of {{FileSystem::open(PathHandle)}} cannot distinguish between I/O > errors and an invalid handle. The signature should include a specific, > checked exception for this case.
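The shape of the requested change is a dedicated IOException subclass, so callers can catch handle-validation failures separately from ordinary I/O errors. A standalone sketch of the pattern (class and method names here are stand-ins for illustration, not the committed Hadoop API):

```java
import java.io.IOException;

// A specific checked exception lets callers tell "the handle no longer
// matches the file" apart from ordinary I/O failures.
public class PathHandleSketch {
    static class InvalidPathHandleException extends IOException {
        InvalidPathHandleException(String message) { super(message); }
    }

    // Stand-in for FileSystem::open(PathHandle).
    static String open(boolean handleStillValid) throws IOException {
        if (!handleStillValid) {
            throw new InvalidPathHandleException("handle no longer resolves to the file");
        }
        return "stream";
    }

    // Callers can now branch on the failure mode.
    static String classify(boolean handleStillValid) {
        try {
            return open(handleStillValid);
        } catch (InvalidPathHandleException e) {
            return "stale-handle"; // e.g. re-resolve the path and retry
        } catch (IOException e) {
            return "io-error";
        }
    }

    public static void main(String[] args) {
        System.out.println(classify(true));  // prints stream
        System.out.println(classify(false)); // prints stale-handle
    }
}
```

Ordering the catch blocks from most to least specific is what makes the distinction usable; the subclass must be listed first.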
[jira] [Commented] (HADOOP-15113) NPE in S3A getFileStatus: null instrumentation on using closed instance
[ https://issues.apache.org/jira/browse/HADOOP-15113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16291604#comment-16291604 ] genericqa commented on HADOOP-15113: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 15s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 54s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 12s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 33s{color} | {color:green} hadoop-aws in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 49m 27s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | HADOOP-15113 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12902146/HADOOP-15113-001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 593ebb5c7844 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 37efa67 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/13835/testReport/ | | Max. process+thread count | 334 (vs. ulimit of 5000) | | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/13835/console | | Powered by | Apache Yetus 0.7.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > NPE in S3A getFileStatus: null instrumentation on using closed instance > --- > > Key: HADOOP-15113
[jira] [Commented] (HADOOP-15114) Add closeStreams(...) to IOUtils
[ https://issues.apache.org/jira/browse/HADOOP-15114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16291581#comment-16291581 ] genericqa commented on HADOOP-15114: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 16m 41s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 17s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 13s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 34s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 52s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 36s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch generated 3 new + 16 unchanged - 0 fixed = 19 total (was 16) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 18s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 4s{color} | {color:green} hadoop-common in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 40s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}105m 56s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | HADOOP-15114 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12902131/HADOOP-15114.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 8b97878558d5 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / f8af0e2 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/13832/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/13832/testReport/ | | Max. process+thread count | 1361 (vs. ulimit of 5000) | | modules | C: hadoop-common-project/hadoop-common U:
[jira] [Commented] (HADOOP-13974) S3a CLI to support list/purge of pending multipart commits
[ https://issues.apache.org/jira/browse/HADOOP-13974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16291560#comment-16291560 ] genericqa commented on HADOOP-13974: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 7 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 39s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 33s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 13s{color} | {color:orange} hadoop-tools/hadoop-aws: The patch generated 3 new + 17 unchanged - 0 fixed = 20 total (was 17) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 19s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 37s{color} | {color:green} hadoop-aws in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 49m 21s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | HADOOP-13974 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12902135/HADOOP-13974.006.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux b4958b9ff37b 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / f8af0e2 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/13834/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/13834/testReport/ | | Max. process+thread count | 333 (vs. ulimit of 5000) | | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/13834/console | | Powered by | Apache Yetus 0.7.0-SNAPSHOT http://yetus.apache.org | This
[jira] [Commented] (HADOOP-14444) New implementation of ftp and sftp filesystems
[ https://issues.apache.org/jira/browse/HADOOP-14444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16291557#comment-16291557 ] Steve Loughran commented on HADOOP-14444: - yes, you use Configuration.getPassword() and it will forward the resolution to whatever is the declared key manager, which will be JCEKS files, in the absence of any KMS service. If you are using it, you get it for free...you just need to document & ideally test > New implementation of ftp and sftp filesystems > -- > > Key: HADOOP-14444 > URL: https://issues.apache.org/jira/browse/HADOOP-14444 > Project: Hadoop Common > Issue Type: New Feature > Components: fs >Affects Versions: 2.8.0 >Reporter: Lukas Waldmann >Assignee: Lukas Waldmann > Attachments: HADOOP-14444.10.patch, HADOOP-14444.11.patch, > HADOOP-14444.12.patch, HADOOP-14444.13.patch, HADOOP-14444.2.patch, > HADOOP-14444.3.patch, HADOOP-14444.4.patch, HADOOP-14444.5.patch, > HADOOP-14444.6.patch, HADOOP-14444.7.patch, HADOOP-14444.8.patch, > HADOOP-14444.9.patch, HADOOP-14444.patch > > > The current implementations of the FTP and SFTP filesystems have severe limitations > and performance issues when dealing with a high number of files. My patch > solves those issues and integrates both filesystems in such a way that most of the > core functionality is common to both, therefore simplifying the > maintainability. > The core features: > * Support for HTTP/SOCKS proxies > * Support for passive FTP > * Support for explicit FTPS (SSL/TLS) > * Support for connection pooling - a new connection is not created for every > single command but reused from the pool. > For a huge number of files it shows an order-of-magnitude performance improvement > over non-pooled connections. > * Caching of directory trees. For ftp you always need to list the whole directory > whenever you ask for information about a particular file. > Again for a huge number of files it shows an order-of-magnitude performance > improvement over non-cached connections.
> * Support for keep-alive (NOOP) messages to avoid connection drops > * Support for Unix-style or regexp wildcard globs - useful for listing > particular files across the whole directory tree > * Support for reestablishing broken ftp data transfers - which can happen > surprisingly often
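The getPassword() behaviour Steve describes is a fallback chain: configured credential providers (e.g. a JCEKS keystore file) are consulted first, and only then the clear-text configuration value. A stub illustrating that resolution order (the real logic lives in org.apache.hadoop.conf.Configuration; this standalone class and its method names are illustrative only):

```java
import java.util.HashMap;
import java.util.Map;

// Stub of Configuration.getPassword()'s resolution order: credential
// providers win over clear-text properties, so a secret configured in
// a keystore never needs to appear in a plain config file.
public class GetPasswordSketch {
    private final Map<String, char[]> credentialProviders = new HashMap<>();
    private final Map<String, String> clearTextProps = new HashMap<>();

    void addProviderEntry(String name, String secret) {
        credentialProviders.put(name, secret.toCharArray());
    }

    void set(String name, String value) {
        clearTextProps.put(name, value);
    }

    char[] getPassword(String name) {
        char[] fromProvider = credentialProviders.get(name);
        if (fromProvider != null) {
            return fromProvider; // provider wins; keystore value takes precedence
        }
        String clear = clearTextProps.get(name);
        return clear == null ? null : clear.toCharArray();
    }
}
```

A caller such as the new ftp filesystem would then ask for its password key (e.g. a hypothetical fs.ftp.password) and transparently get whichever source the deployment configured.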
[jira] [Commented] (HADOOP-15106) FileSystem::open(PathHandle) should throw a specific exception on validation failure
[ https://issues.apache.org/jira/browse/HADOOP-15106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16291559#comment-16291559 ] genericqa commented on HADOOP-15106: (x) *-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 11s | Docker mode activated. |
|| Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| trunk Compile Tests ||
| 0 | mvndep | 1m 36s | Maven dependency ordering for branch |
| +1 | mvninstall | 18m 10s | trunk passed |
| +1 | compile | 13m 38s | trunk passed |
| +1 | checkstyle | 2m 8s | trunk passed |
| +1 | mvnsite | 1m 54s | trunk passed |
| +1 | shadedclient | 14m 37s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 3m 31s | trunk passed |
| +1 | javadoc | 1m 30s | trunk passed |
|| Patch Compile Tests ||
| 0 | mvndep | 0m 15s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 26s | the patch passed |
| +1 | compile | 12m 4s | the patch passed |
| +1 | javac | 12m 4s | the patch passed |
| +1 | checkstyle | 2m 6s | the patch passed |
| +1 | mvnsite | 1m 48s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 9m 54s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 3m 18s | the patch passed |
| +1 | javadoc | 1m 26s | the patch passed |
|| Other Tests ||
| -1 | unit | 7m 48s | hadoop-common in the patch failed. |
| +1 | unit | 1m 29s | hadoop-hdfs-client in the patch passed. |
| +1 | asflicense | 0m 34s | The patch does not generate ASF License warnings. |
| | | 97m 37s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestRaceWhenRelogin |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HADOOP-15106 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12902130/HADOOP-15106.03.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux a3fb34e918dc 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / f8af0e2 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit |
[jira] [Commented] (HADOOP-13171) Add StorageStatistics to S3A; instrument some more operations
[ https://issues.apache.org/jira/browse/HADOOP-13171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16291552#comment-16291552 ] Steve Loughran commented on HADOOP-13171: - I should add I've had a mixed experience using the Storage Stats for some work where I'm trying to estimate the cost of jobs, because they're shared across FS instances and threads. So on any query engine running >1 query in a thread, even if you have separate FS instances, all their stats get aggregated. That stops me producing useful values in the committers (which publish stats to their .pending files): they overcount everything, as each task done in the same process duplicates the values. I'd really like a storage stats counter which is ThreadLocal, where every metric update to a counter also updates it for the current thread. That could be expensive, so it's very much something you'd only want to enable when you planned to use it. At the same time, for the query engines to explicitly collect this stuff themselves, it'd be good to have some public FS option: "enable threadlocal stats". > Add StorageStatistics to S3A; instrument some more operations > - > > Key: HADOOP-13171 > URL: https://issues.apache.org/jira/browse/HADOOP-13171 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Fix For: 2.8.0, 3.0.0-alpha1 > > Attachments: HADOOP-13171-014.patch, HADOOP-13171-016.patch, > HADOOP-13171-branch-2-001.patch, HADOOP-13171-branch-2-002.patch, > HADOOP-13171-branch-2-003.patch, HADOOP-13171-branch-2-004.patch, > HADOOP-13171-branch-2-005.patch, HADOOP-13171-branch-2-006.patch, > HADOOP-13171-branch-2-007.patch, HADOOP-13171-branch-2-008.patch, > HADOOP-13171-branch-2-009.patch, HADOOP-13171-branch-2-010.patch, > HADOOP-13171-branch-2-011.patch, HADOOP-13171-branch-2-012.patch, > HADOOP-13171-branch-2-013.patch, HADOOP-13171-branch-2-015.patch, > HADOOP-13171-branch-2.8-017.patch > > > Add 
{{StorageStatistics}} support to S3A, collecting the same metrics as the > instrumentation, but sharing across all instances. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13171) Add StorageStatistics to S3A; instrument some more operations
[ https://issues.apache.org/jira/browse/HADOOP-13171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16291545#comment-16291545 ] Steve Loughran commented on HADOOP-13171: - No fundamental reason; they really just evolved so we could see what the read performance issues were with seeks & aborts, so the streams' toString() prints them. The upload stuff is more about collecting useful data on a system. If you can see value in aggregating them all: do it. One thing: the input stream stats aren't collected in the live stream, so there's no performance hit in using them; it's only in close() that they get pulled up. For the write stuff (and committer code) everything is done through the atomic longs. > Add StorageStatistics to S3A; instrument some more operations > - > > Key: HADOOP-13171 > URL: https://issues.apache.org/jira/browse/HADOOP-13171 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Fix For: 2.8.0, 3.0.0-alpha1 > > Attachments: HADOOP-13171-014.patch, HADOOP-13171-016.patch, > HADOOP-13171-branch-2-001.patch, HADOOP-13171-branch-2-002.patch, > HADOOP-13171-branch-2-003.patch, HADOOP-13171-branch-2-004.patch, > HADOOP-13171-branch-2-005.patch, HADOOP-13171-branch-2-006.patch, > HADOOP-13171-branch-2-007.patch, HADOOP-13171-branch-2-008.patch, > HADOOP-13171-branch-2-009.patch, HADOOP-13171-branch-2-010.patch, > HADOOP-13171-branch-2-011.patch, HADOOP-13171-branch-2-012.patch, > HADOOP-13171-branch-2-013.patch, HADOOP-13171-branch-2-015.patch, > HADOOP-13171-branch-2.8-017.patch > > > Add {{StorageStatistics}} support to S3A, collecting the same metrics as the > instrumentation, but sharing across all instances. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
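The per-thread statistics idea discussed in this thread can be sketched in a few lines. This is a hypothetical illustration, not Hadoop's StorageStatistics API: every increment updates a shared AtomicLong (the existing cross-instance behaviour) and, when the opt-in flag is set, a thread-local copy as well, so a caller can read back only the work its own thread performed. All names here (DualStatCounter, etc.) are invented.

```java
import java.util.concurrent.atomic.AtomicLong;

public class DualStatCounter {
  private final AtomicLong shared = new AtomicLong();   // aggregated across all threads
  private final ThreadLocal<long[]> perThread =
      ThreadLocal.withInitial(() -> new long[1]);       // cheap mutable per-thread holder
  private final boolean threadLocalEnabled;             // opt-in, since it costs extra work

  public DualStatCounter(boolean threadLocalEnabled) {
    this.threadLocalEnabled = threadLocalEnabled;
  }

  /** Update the shared counter, and the current thread's copy if enabled. */
  public void increment(long delta) {
    shared.addAndGet(delta);
    if (threadLocalEnabled) {
      perThread.get()[0] += delta;
    }
  }

  public long sharedValue() { return shared.get(); }

  public long threadValue() { return perThread.get()[0]; }
}
```

A query engine could then snapshot threadValue() at task start and end to attribute cost per query, instead of reading the process-wide aggregate.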
[jira] [Updated] (HADOOP-15113) NPE in S3A getFileStatus: null instrumentation on using closed instance
[ https://issues.apache.org/jira/browse/HADOOP-15113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-15113: Target Version/s: 3.1.0 Status: Patch Available (was: In Progress) > NPE in S3A getFileStatus: null instrumentation on using closed instance > --- > > Key: HADOOP-15113 > URL: https://issues.apache.org/jira/browse/HADOOP-15113 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.1.0 >Reporter: Steve Loughran >Assignee: Steve Loughran > Attachments: HADOOP-15113-001.patch > > > NPE in getFileStatus in a downstream test of mine; s3a ireland > {{PathMetadata pm = metadataStore.get(path, needEmptyDirectoryFlag);. }} > Something up with the bucket config? -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15113) NPE in S3A getFileStatus: null instrumentation on using closed instance
[ https://issues.apache.org/jira/browse/HADOOP-15113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-15113: Attachment: HADOOP-15113-001.patch

Patch 001:
* adds an entryPoint() call which performs the check as well as incrementing the instrumentation counter
* declares it for all public methods
* adds a test

Testing: S3 Ireland.

There are no checks on the internal operations. I'm tempted to add them to the ops which are used by the input/output streams & via writeOperationsHelper, to catch the case: FS closed while the FS is in use.

> NPE in S3A getFileStatus: null instrumentation on using closed instance > --- > > Key: HADOOP-15113 > URL: https://issues.apache.org/jira/browse/HADOOP-15113 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.1.0 >Reporter: Steve Loughran >Assignee: Steve Loughran > Attachments: HADOOP-15113-001.patch > > > NPE in getFileStatus in a downstream test of mine; s3a ireland > {{PathMetadata pm = metadataStore.get(path, needEmptyDirectoryFlag);. }} > Something up with the bucket config? -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
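The entryPoint() pattern described in the patch notes can be sketched roughly as follows. This is an illustration of the pattern only, with invented names, not the actual S3AFileSystem code from HADOOP-15113-001.patch: every public operation first verifies the instance is still open and bumps a counter, turning the late NullPointerException into an early, descriptive IOException.

```java
import java.io.IOException;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicLong;

public class ClosedCheckingFs {
  private final AtomicBoolean closed = new AtomicBoolean(false);
  private final AtomicLong invocations = new AtomicLong();

  /** Called at the start of every public operation. */
  protected void entryPoint(String operation) throws IOException {
    if (closed.get()) {
      throw new IOException("FileSystem is closed: cannot execute " + operation);
    }
    invocations.incrementAndGet();   // stands in for the instrumentation counter
  }

  public String getFileStatus(String path) throws IOException {
    entryPoint("getFileStatus");
    return "status of " + path;      // placeholder for the real lookup
  }

  public void close() {
    closed.set(true);
  }

  public long invocationCount() {
    return invocations.get();
  }
}
```

The remaining gap Steve mentions, internal operations reached through streams, would need the same check on those code paths too.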
[jira] [Commented] (HADOOP-10054) ViewFsFileStatus.toString() is broken
[ https://issues.apache.org/jira/browse/HADOOP-10054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16291529#comment-16291529 ] Hudson commented on HADOOP-10054: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13377 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/13377/]) HADOOP-10054. ViewFsFileStatus.toString() is broken. Contributed by (xyao: rev 37efa67e377e7fc251ee0088098f4b1700d21823) * (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileStatus.java > ViewFsFileStatus.toString() is broken > - > > Key: HADOOP-10054 > URL: https://issues.apache.org/jira/browse/HADOOP-10054 > Project: Hadoop Common > Issue Type: Improvement > Components: fs >Affects Versions: 2.0.5-alpha >Reporter: Paul Han >Assignee: Hanisha Koneru >Priority: Minor > Fix For: 3.0.1 > > Attachments: HADOOP-10054.001.patch, HADOOP-10054.002.patch > > > ViewFsFileStatus.toString is broken. Following code snippet : > {code} > FileStatus stat= somefunc(); // somefunc() returns an instance of > ViewFsFileStatus > System.out.println("path:" + stat.getPath()); > System.out.println(stat.toString()); > {code} > produces the output: > {code} > path:viewfs://x.com/user/X/tmp-48 > ViewFsFileStatus{path=null; isDirectory=false; length=0; replication=0; > blocksize=0; modification_time=0; access_time=0; owner=; group=; > permission=rw-rw-rw-; isSymlink=false} > {code} > Note that "path=null" is not correct. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
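The "path=null" symptom in this issue is characteristic of a wrapper bug: a toString() that reads the superclass's fields directly instead of calling the overridden getters, so a delegating subclass prints nulls even though its getters work. The classes below are a minimal, invented illustration of that bug class, not the Hadoop source.

```java
public class WrapperToStringDemo {
  static class Status {
    private final String path;
    Status(String path) { this.path = path; }
    public String getPath() { return path; }
    @Override public String toString() {
      // buggy pattern: reads the field, which a delegating subclass leaves null;
      // the fix is to call getPath() so overrides are honoured
      return "Status{path=" + path + "}";
    }
  }

  /** Wrapper that overrides the getter but inherits the field-reading toString(). */
  static class DelegatingStatus extends Status {
    private final Status wrapped;
    DelegatingStatus(Status wrapped) {
      super(null);                 // field stays null; data lives in the wrapped object
      this.wrapped = wrapped;
    }
    @Override public String getPath() { return wrapped.getPath(); }
  }
}
```

With this shape, getPath() returns the real path while toString() prints path=null, exactly the mismatch shown in the issue's output.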
[jira] [Assigned] (HADOOP-15093) Deprecation of yarn.resourcemanager.zk-address is undocumented
[ https://issues.apache.org/jira/browse/HADOOP-15093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar reassigned HADOOP-15093: --- Assignee: Ajay Kumar > Deprecation of yarn.resourcemanager.zk-address is undocumented > -- > > Key: HADOOP-15093 > URL: https://issues.apache.org/jira/browse/HADOOP-15093 > Project: Hadoop Common > Issue Type: Bug > Components: documentation >Affects Versions: 2.9.0, 3.0.0, 3.1.0 >Reporter: Eric Wohlstadter >Assignee: Ajay Kumar > Labels: documentation > > "yarn.resourcemanager.zk-address" was deprecated in 2.9.x and moved to > "hadoop.zk.address". However this doesn't appear in Deprecated Properties. > Additionally, the Configuration base class doesn't auto-translate from > "yarn.resourcemanager.zk-address" to "hadoop.zk.address". Only the sub-class > YarnConfiguration does the translation. > Also, the 2.9+ Resource Manager HA documentation still refers to the use of > "yarn.resourcemanager.zk-address". > https://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/ResourceManagerHA.html -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-10054) ViewFsFileStatus.toString() is broken
[ https://issues.apache.org/jira/browse/HADOOP-10054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaoyu Yao updated HADOOP-10054: Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.0.1 Status: Resolved (was: Patch Available) Thanks [~hanishakoneru] for the contribution and all for the reviews. I've committed the patch to the trunk. > ViewFsFileStatus.toString() is broken > - > > Key: HADOOP-10054 > URL: https://issues.apache.org/jira/browse/HADOOP-10054 > Project: Hadoop Common > Issue Type: Improvement > Components: fs >Affects Versions: 2.0.5-alpha >Reporter: Paul Han >Assignee: Hanisha Koneru >Priority: Minor > Fix For: 3.0.1 > > Attachments: HADOOP-10054.001.patch, HADOOP-10054.002.patch > > > ViewFsFileStatus.toString is broken. Following code snippet : > {code} > FileStatus stat= somefunc(); // somefunc() returns an instance of > ViewFsFileStatus > System.out.println("path:" + stat.getPath()); > System.out.println(stat.toString()); > {code} > produces the output: > {code} > path:viewfs://x.com/user/X/tmp-48 > ViewFsFileStatus{path=null; isDirectory=false; length=0; replication=0; > blocksize=0; modification_time=0; access_time=0; owner=; group=; > permission=rw-rw-rw-; isSymlink=false} > {code} > Note that "path=null" is not correct. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15114) Add closeStreams(...) to IOUtils
[ https://issues.apache.org/jira/browse/HADOOP-15114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16291480#comment-16291480 ] Steve Loughran commented on HADOOP-15114: - Add a test. Examples:
* no streams
* null stream (closeStreams(stream, null, stream))
* stream set to throw an IOE on close
* stream set to throw an NPE on close

> Add closeStreams(...) to IOUtils > > > Key: HADOOP-15114 > URL: https://issues.apache.org/jira/browse/HADOOP-15114 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Ajay Kumar >Assignee: Ajay Kumar > Attachments: HADOOP-15114.001.patch > > > Add closeStreams(...) in IOUtils. Originally suggested by [Jason > Lowe|https://issues.apache.org/jira/browse/HDFS-12881?focusedCommentId=16288320=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16288320]. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
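A minimal sketch of a closeStreams(...) helper that would satisfy the test cases listed above, with invented names rather than the actual HADOOP-15114 patch: every non-null stream is closed, and anything close() throws, IOE or NPE alike, is swallowed so one bad stream cannot prevent the rest from being closed.

```java
import java.io.Closeable;

public class IOUtilsSketch {
  /**
   * Close every non-null stream, ignoring anything close() throws.
   * Safe to call with no arguments, null entries, or a null array.
   */
  public static void closeStreams(Closeable... streams) {
    if (streams == null) {
      return;
    }
    for (Closeable stream : streams) {
      if (stream != null) {
        try {
          stream.close();
        } catch (Throwable t) {
          // IOE, NPE, anything: a real implementation would log at debug
          // level and continue to the next stream
        }
      }
    }
  }
}
```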
[jira] [Commented] (HADOOP-15101) what testListStatusFile verified not consistent with listStatus declaration in FileSystem
[ https://issues.apache.org/jira/browse/HADOOP-15101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16291482#comment-16291482 ] Ajay Kumar commented on HADOOP-15101: - [~zhoutai.zt], We can update {{hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md}}. > what testListStatusFile verified not consistent with listStatus declaration > in FileSystem > --- > > Key: HADOOP-15101 > URL: https://issues.apache.org/jira/browse/HADOOP-15101 > Project: Hadoop Common > Issue Type: Bug > Components: fs, test >Affects Versions: 3.0.0-beta1 >Reporter: zhoutai.zt >Priority: Critical > > {code} > @Test > public void testListStatusFile() throws Throwable { > describe("test the listStatus(path) on a file"); > Path f = touchf("liststatusfile"); > verifyStatusArrayMatchesFile(f, getFileSystem().listStatus(f)); > } > {code} > In this case, first create a file _f_, then listStatus on _f_,expect > listStatus returns an array of one FileStatus. But this is not consistent > with the declarations in FileSystem, i.e. > {code} > " > List the statuses of the files/directories in the given path if the path is a > directory. > Parameters: > f given path > Returns: > the statuses of the files/directories in the given patch > " > {code} > Which is the expected? The behave in fs contract test or in FileSystem? -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15106) FileSystem::open(PathHandle) should throw a specific exception on validation failure
[ https://issues.apache.org/jira/browse/HADOOP-15106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16291467#comment-16291467 ] genericqa commented on HADOOP-15106: (/) *+1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 9s | Docker mode activated. |
|| Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| trunk Compile Tests ||
| 0 | mvndep | 0m 14s | Maven dependency ordering for branch |
| +1 | mvninstall | 16m 40s | trunk passed |
| +1 | compile | 12m 42s | trunk passed |
| +1 | checkstyle | 2m 4s | trunk passed |
| +1 | mvnsite | 1m 51s | trunk passed |
| +1 | shadedclient | 14m 47s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 3m 26s | trunk passed |
| +1 | javadoc | 1m 27s | trunk passed |
|| Patch Compile Tests ||
| 0 | mvndep | 0m 18s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 28s | the patch passed |
| +1 | compile | 12m 38s | the patch passed |
| +1 | javac | 12m 38s | the patch passed |
| +1 | checkstyle | 2m 14s | the patch passed |
| +1 | mvnsite | 1m 53s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 10m 2s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 3m 25s | the patch passed |
| +1 | javadoc | 1m 25s | the patch passed |
|| Other Tests ||
| +1 | unit | 8m 43s | hadoop-common in the patch passed. |
| +1 | unit | 1m 33s | hadoop-hdfs-client in the patch passed. |
| +1 | asflicense | 0m 33s | The patch does not generate ASF License warnings. |
| | | 95m 51s | |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HADOOP-15106 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12902119/HADOOP-15106.02.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 5c70934b60ad 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / f8af0e2 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/13830/testReport/ |
| Max. process+thread count | 1497 (vs. ulimit of 5000) |
| modules | C:
[jira] [Commented] (HADOOP-14788) Credentials readTokenStorageFile to stop wrapping IOEs in IOEs
[ https://issues.apache.org/jira/browse/HADOOP-14788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16291458#comment-16291458 ] genericqa commented on HADOOP-14788: (/) *+1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 9m 27s | Docker mode activated. |
|| Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| trunk Compile Tests ||
| +1 | mvninstall | 17m 36s | trunk passed |
| +1 | compile | 13m 3s | trunk passed |
| +1 | checkstyle | 0m 37s | trunk passed |
| +1 | mvnsite | 1m 6s | trunk passed |
| +1 | shadedclient | 11m 41s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 30s | trunk passed |
| +1 | javadoc | 0m 54s | trunk passed |
|| Patch Compile Tests ||
| +1 | mvninstall | 0m 46s | the patch passed |
| +1 | compile | 12m 30s | the patch passed |
| +1 | javac | 12m 30s | the patch passed |
| +1 | checkstyle | 0m 36s | the patch passed |
| +1 | mvnsite | 1m 3s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 10m 11s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 42s | the patch passed |
| +1 | javadoc | 0m 55s | the patch passed |
|| Other Tests ||
| +1 | unit | 8m 56s | hadoop-common in the patch passed. |
| +1 | asflicense | 0m 32s | The patch does not generate ASF License warnings. |
| | | 92m 55s | |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HADOOP-14788 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12902118/HADOOP-14788.004.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 265185097b26 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / f8af0e2 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/13829/testReport/ |
| Max. process+thread count | 1718 (vs. ulimit of 5000) |
| modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/13829/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.

> Credentials readTokenStorageFile to stop wrapping IOEs in IOEs > -- > > Key:
[jira] [Updated] (HADOOP-13974) S3a CLI to support list/purge of pending multipart commits
[ https://issues.apache.org/jira/browse/HADOOP-13974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aaron Fabbri updated HADOOP-13974: -- Attachment: HADOOP-13974.006.patch

[~ste...@apache.org] this is almost ready. I had to add eventually() retries around multipart list assertions that were flaky. I want to convince myself this really is eventual consistency before we commit this. If so, we need to consider any implications for the magic committer; I haven't thought it through yet.

v6 patch:
- Fix NPE in some unit tests: made a sacrifice to the Mockito gods (updated the S3A mock FS to return something from getWriteOperationHelper()).
- Add eventually() around a list MPU test assertion that was failing occasionally.
- Remove a redundant abortUpload() function: something similar was added for the committer work, so use that instead.
- Clean up test logging a bit; fix a typo in the CLI usage.
- Fix a couple of test issues from before I added the "-force" flag for the "-abort" option.

Note these tests use the S3Guard assumption, so they only run with that profile. Ran all integration *and* unit tests in us-west-2.

> S3a CLI to support list/purge of pending multipart commits > -- > > Key: HADOOP-13974 > URL: https://issues.apache.org/jira/browse/HADOOP-13974 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.0.0-beta1 >Reporter: Steve Loughran >Assignee: Aaron Fabbri > Attachments: HADOOP-13974.001.patch, HADOOP-13974.002.patch, > HADOOP-13974.003.patch, HADOOP-13974.004.patch, HADOOP-13974.005.patch, > HADOOP-13974.006.patch > > > The S3A CLI will need to be able to list and delete pending multipart > commits. > We can do the cleanup already via fs.s3a properties. The CLI will let scripts > stat for outstanding data (have a different exit code) and permit batch jobs > to explicitly trigger cleanups. > This will become critical with the multipart committer, as there's a > significantly higher likelihood of commits remaining outstanding. 
> We may also want to be able to enumerate/cancel all pending commits in the FS > tree -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
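The eventually() retry idiom mentioned in the patch notes, re-running an assertion until it passes or a deadline expires so that S3's eventual consistency doesn't make listing tests flaky, can be sketched generically as below. This is an invented stand-alone helper for illustration; it is not the actual Hadoop test utility.

```java
public class Eventually {
  /**
   * Re-run the probe until it stops throwing or the timeout elapses;
   * on timeout, surface the last failure to the caller.
   */
  public static void eventually(long timeoutMs, long intervalMs, Runnable probe)
      throws InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (true) {
      try {
        probe.run();
        return;                                  // probe passed
      } catch (AssertionError | RuntimeException e) {
        if (System.currentTimeMillis() >= deadline) {
          throw e;                               // give up: rethrow last failure
        }
        Thread.sleep(intervalMs);                // back off, then retry
      }
    }
  }
}
```

A test would wrap only the consistency-sensitive assertion (here, the multipart-upload listing) in the probe, keeping the rest of the test fail-fast.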
[jira] [Moved] (HADOOP-15117) open(PathHandle) contract test should be exhaustive for default options
[ https://issues.apache.org/jira/browse/HADOOP-15117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Douglas moved HDFS-12926 to HADOOP-15117: --- Target Version/s: 3.1.0 (was: 3.1.0) Key: HADOOP-15117 (was: HDFS-12926) Project: Hadoop Common (was: Hadoop HDFS) > open(PathHandle) contract test should be exhaustive for default options > --- > > Key: HADOOP-15117 > URL: https://issues.apache.org/jira/browse/HADOOP-15117 > Project: Hadoop Common > Issue Type: Test >Reporter: Chris Douglas > > The current {{AbstractContractOpenTest}} covers many, but not all of the > permutations of the default {{HandleOpt}}. It could also be refactored to be > clearer as documentation -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15106) FileSystem::open(PathHandle) should throw a specific exception on validation failure
[ https://issues.apache.org/jira/browse/HADOOP-15106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Douglas updated HADOOP-15106: --- Attachment: HADOOP-15106.04.patch On reflection... the contract test should be exhaustive and be easier to read. Opened HADOOP-15117 > FileSystem::open(PathHandle) should throw a specific exception on validation > failure > > > Key: HADOOP-15106 > URL: https://issues.apache.org/jira/browse/HADOOP-15106 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Chris Douglas >Priority: Minor > Attachments: HADOOP-15106.00.patch, HADOOP-15106.01.patch, > HADOOP-15106.02.patch, HADOOP-15106.03.patch, HADOOP-15106.04.patch > > > Callers of {{FileSystem::open(PathHandle)}} cannot distinguish between I/O > errors and an invalid handle. The signature should include a specific, > checked exception for this case. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15106) FileSystem::open(PathHandle) should throw a specific exception on validation failure
[ https://issues.apache.org/jira/browse/HADOOP-15106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16291380#comment-16291380 ] Chris Douglas commented on HADOOP-15106: bq. Is there any unit test covering this? Yes, the contract tests. I added more checks as part of HDFS-12882, but they're not exhaustive. The negative cases are updated to catch {{InvalidPathHandleException}} instead of {{IOException}} in the patch. bq. How about moving InvalidPathHandleException above IOException in the javadoc to be consistent with others like getFileStatus (e.g., FileNotFoundException goes above IOException) Sure, np. > FileSystem::open(PathHandle) should throw a specific exception on validation > failure > > > Key: HADOOP-15106 > URL: https://issues.apache.org/jira/browse/HADOOP-15106 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Chris Douglas >Priority: Minor > Attachments: HADOOP-15106.00.patch, HADOOP-15106.01.patch, > HADOOP-15106.02.patch, HADOOP-15106.03.patch > > > Callers of {{FileSystem::open(PathHandle)}} cannot distinguish between I/O > errors and an invalid handle. The signature should include a specific, > checked exception for this case. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
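What the specific checked exception buys callers can be sketched like this. The stand-in classes below are invented for illustration; only the idea comes from the issue: a handle-validation failure surfaces as a dedicated IOException subclass, so callers can branch on a stale handle separately from ordinary I/O failure while existing catch (IOException) blocks keep working.

```java
import java.io.IOException;

public class PathHandleDemo {
  /** Specific exception for handle-validation failure; extends IOException
      so callers that only catch IOException are unaffected. */
  public static class InvalidPathHandleException extends IOException {
    public InvalidPathHandleException(String msg) { super(msg); }
  }

  /** Stand-in for an open(PathHandle)-style call. */
  static String open(String handle) throws IOException {
    if (handle == null || handle.isEmpty()) {
      throw new InvalidPathHandleException("handle no longer resolves");
    }
    return "stream for " + handle;
  }

  /** Callers can now distinguish the two failure modes. */
  public static String classify(String handle) {
    try {
      return open(handle);
    } catch (InvalidPathHandleException e) {
      return "stale-handle";   // e.g. re-resolve the path and retry
    } catch (IOException e) {
      return "io-error";       // genuine I/O problem: propagate or fail
    }
  }
}
```

The order of the catch blocks matters: the more specific exception must come first, mirroring the javadoc-ordering point raised in the review.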
[jira] [Commented] (HADOOP-15106) FileSystem::open(PathHandle) should throw a specific exception on validation failure
[ https://issues.apache.org/jira/browse/HADOOP-15106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16291379#comment-16291379 ] Virajith Jalaparti commented on HADOOP-15106: - Minor comment -- How about moving {{InvalidPathHandleException}} above {{IOException}} in the javadoc, to be consistent with others like {{getFileStatus}} (e.g., {{FileNotFoundException}} goes above {{IOException}})? Other than that, lgtm. > FileSystem::open(PathHandle) should throw a specific exception on validation > failure > > > Key: HADOOP-15106 > URL: https://issues.apache.org/jira/browse/HADOOP-15106 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Chris Douglas >Priority: Minor > Attachments: HADOOP-15106.00.patch, HADOOP-15106.01.patch, > HADOOP-15106.02.patch, HADOOP-15106.03.patch > > > Callers of {{FileSystem::open(PathHandle)}} cannot distinguish between I/O > errors and an invalid handle. The signature should include a specific, > checked exception for this case. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15106) FileSystem::open(PathHandle) should throw a specific exception on validation failure
[ https://issues.apache.org/jira/browse/HADOOP-15106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16291372#comment-16291372 ] Íñigo Goiri commented on HADOOP-15106: -- The doc change looks good. Is there any unit test covering this? I remember that the positive cases were tested in some class (which one was it?), can we add an assert for this exception? > FileSystem::open(PathHandle) should throw a specific exception on validation > failure > > > Key: HADOOP-15106 > URL: https://issues.apache.org/jira/browse/HADOOP-15106 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Chris Douglas >Priority: Minor > Attachments: HADOOP-15106.00.patch, HADOOP-15106.01.patch, > HADOOP-15106.02.patch, HADOOP-15106.03.patch > > > Callers of {{FileSystem::open(PathHandle)}} cannot distinguish between I/O > errors and an invalid handle. The signature should include a specific, > checked exception for this case. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
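The change discussed in this thread gives callers a dedicated checked exception for handle-validation failure. A minimal sketch of the calling pattern this enables — stand-in class and method names only, not the actual Hadoop API surface from the patch:

```java
import java.io.IOException;

// Stand-in names only: a minimal sketch of why a specific checked exception
// helps callers of FileSystem::open(PathHandle). The real class added by the
// patch is InvalidPathHandleException; everything else here is hypothetical.
class InvalidHandleDemo {

    static class InvalidPathHandleException extends IOException {
        InvalidPathHandleException(String msg) { super(msg); }
    }

    // Stand-in for open(PathHandle): validation failure raises the specific type.
    static void open(boolean handleStillValid) throws IOException {
        if (!handleStillValid) {
            throw new InvalidPathHandleException("constraints no longer hold");
        }
    }

    // With a dedicated type, callers can branch on the failure mode.
    static String classify(boolean handleStillValid) {
        try {
            open(handleStillValid);
            return "opened";
        } catch (InvalidPathHandleException e) {
            return "invalid-handle";  // e.g. re-resolve the path, get a new handle
        } catch (IOException e) {
            return "io-error";        // ordinary I/O failure; retry as-is
        }
    }

    public static void main(String[] args) {
        System.out.println(classify(true));   // opened
        System.out.println(classify(false));  // invalid-handle
    }
}
```

With a plain IOException signature, both catch arms above would collapse into one, which is exactly the ambiguity the issue describes.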
[jira] [Updated] (HADOOP-15114) Add closeStreams(...) to IOUtils
[ https://issues.apache.org/jira/browse/HADOOP-15114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HADOOP-15114: Status: Patch Available (was: Open) > Add closeStreams(...) to IOUtils > > > Key: HADOOP-15114 > URL: https://issues.apache.org/jira/browse/HADOOP-15114 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Ajay Kumar >Assignee: Ajay Kumar > Attachments: HADOOP-15114.001.patch > > > Add closeStreams(...) in IOUtils. Originally suggested by [Jason > Lowe|https://issues.apache.org/jira/browse/HDFS-12881?focusedCommentId=16288320=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16288320]. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15114) Add closeStreams(...) to IOUtils
[ https://issues.apache.org/jira/browse/HADOOP-15114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HADOOP-15114: Attachment: HADOOP-15114.001.patch > Add closeStreams(...) to IOUtils > > > Key: HADOOP-15114 > URL: https://issues.apache.org/jira/browse/HADOOP-15114 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Ajay Kumar >Assignee: Ajay Kumar > Attachments: HADOOP-15114.001.patch > > > Add closeStreams(...) in IOUtils. Originally suggested by [Jason > Lowe|https://issues.apache.org/jira/browse/HDFS-12881?focusedCommentId=16288320=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16288320]. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15114) Add closeStreams(...) to IOUtils
[ https://issues.apache.org/jira/browse/HADOOP-15114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HADOOP-15114: Attachment: (was: HADOOP-15114.001.patch) > Add closeStreams(...) to IOUtils > > > Key: HADOOP-15114 > URL: https://issues.apache.org/jira/browse/HADOOP-15114 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Ajay Kumar >Assignee: Ajay Kumar > > Add closeStreams(...) in IOUtils. Originally suggested by [Jason > Lowe|https://issues.apache.org/jira/browse/HDFS-12881?focusedCommentId=16288320=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16288320]. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15106) FileSystem::open(PathHandle) should throw a specific exception on validation failure
[ https://issues.apache.org/jira/browse/HADOOP-15106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Douglas updated HADOOP-15106: --- Attachment: HADOOP-15106.03.patch Good idea. Added an example to the doc > FileSystem::open(PathHandle) should throw a specific exception on validation > failure > > > Key: HADOOP-15106 > URL: https://issues.apache.org/jira/browse/HADOOP-15106 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Chris Douglas >Priority: Minor > Attachments: HADOOP-15106.00.patch, HADOOP-15106.01.patch, > HADOOP-15106.02.patch, HADOOP-15106.03.patch > > > Callers of {{FileSystem::open(PathHandle)}} cannot distinguish between I/O > errors and an invalid handle. The signature should include a specific, > checked exception for this case. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15114) Add closeStreams(...) to IOUtils
[ https://issues.apache.org/jira/browse/HADOOP-15114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HADOOP-15114: Attachment: HADOOP-15114.001.patch [~jlowe], could you please have a look at the patch? Please let me know if this warrants a test case. (The other methods, {{IOUtils#cleanupWithLogger}} and {{IOUtils#closeStream}}, don't have one.) > Add closeStreams(...) to IOUtils > > > Key: HADOOP-15114 > URL: https://issues.apache.org/jira/browse/HADOOP-15114 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Ajay Kumar >Assignee: Ajay Kumar > Attachments: HADOOP-15114.001.patch > > > Add closeStreams(...) in IOUtils. Originally suggested by [Jason > Lowe|https://issues.apache.org/jira/browse/HDFS-12881?focusedCommentId=16288320=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16288320].
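For reference, the kind of helper proposed here could look roughly like the following. This is a hedged sketch with assumed semantics (close every argument, rethrow the first failure instead of swallowing it); it is not the contents of HADOOP-15114.001.patch:

```java
import java.io.Closeable;
import java.io.IOException;

// Hypothetical sketch of an IOUtils.closeStreams(...)-style helper: close all
// arguments, tolerate nulls, and propagate the first IOException rather than
// only logging it (the behavior difference vs. cleanupWithLogger).
class CloseStreamsSketch {

    static void closeStreams(Closeable... streams) throws IOException {
        IOException first = null;
        for (Closeable c : streams) {
            if (c == null) continue;           // tolerate nulls, like closeStream
            try {
                c.close();
            } catch (IOException e) {
                if (first == null) first = e;  // remember the first failure
            }
        }
        if (first != null) throw first;        // propagate, unlike cleanupWithLogger
    }

    public static void main(String[] args) {
        final StringBuilder log = new StringBuilder();
        Closeable ok = () -> log.append("ok;");
        Closeable bad = () -> { throw new IOException("boom"); };
        try {
            closeStreams(ok, null, bad);
        } catch (IOException e) {
            log.append("caught:").append(e.getMessage());
        }
        System.out.println(log);  // ok;caught:boom
    }
}
```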
[jira] [Commented] (HADOOP-15106) FileSystem::open(PathHandle) should throw a specific exception on validation failure
[ https://issues.apache.org/jira/browse/HADOOP-15106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16291290#comment-16291290 ] Íñigo Goiri commented on HADOOP-15106: -- Can we add a couple of examples to: {code} /** * Thrown when the constraints encoded in a {@link PathHandle} do not hold. */ {code} in {{InvalidPathHandleException}}? > FileSystem::open(PathHandle) should throw a specific exception on validation > failure > > > Key: HADOOP-15106 > URL: https://issues.apache.org/jira/browse/HADOOP-15106 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Chris Douglas >Priority: Minor > Attachments: HADOOP-15106.00.patch, HADOOP-15106.01.patch, > HADOOP-15106.02.patch > > > Callers of {{FileSystem::open(PathHandle)}} cannot distinguish between I/O > errors and an invalid handle. The signature should include a specific, > checked exception for this case.
[jira] [Commented] (HADOOP-15106) FileSystem::open(PathHandle) should throw a specific exception on validation failure
[ https://issues.apache.org/jira/browse/HADOOP-15106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16291287#comment-16291287 ] Chris Douglas commented on HADOOP-15106: /cc [~virajith], [~ste...@apache.org], [~elgoiri] > FileSystem::open(PathHandle) should throw a specific exception on validation > failure > > > Key: HADOOP-15106 > URL: https://issues.apache.org/jira/browse/HADOOP-15106 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Chris Douglas >Priority: Minor > Attachments: HADOOP-15106.00.patch, HADOOP-15106.01.patch, > HADOOP-15106.02.patch > > > Callers of {{FileSystem::open(PathHandle)}} cannot distinguish between I/O > errors and an invalid handle. The signature should include a specific, > checked exception for this case. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15106) FileSystem::open(PathHandle) should throw a specific exception on validation failure
[ https://issues.apache.org/jira/browse/HADOOP-15106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Douglas updated HADOOP-15106: --- Attachment: HADOOP-15106.02.patch Fixed checkstyle. The unit test failures are related: the failure was in removing the {{final}} modifier from {{getPathHandle(Path,HandleOpt...)}}. I'd forgotten why that was required; added a comment. > FileSystem::open(PathHandle) should throw a specific exception on validation > failure > > > Key: HADOOP-15106 > URL: https://issues.apache.org/jira/browse/HADOOP-15106 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Chris Douglas >Priority: Minor > Attachments: HADOOP-15106.00.patch, HADOOP-15106.01.patch, > HADOOP-15106.02.patch > > > Callers of {{FileSystem::open(PathHandle)}} cannot distinguish between I/O > errors and an invalid handle. The signature should include a specific, > checked exception for this case.
[jira] [Updated] (HADOOP-14788) Credentials readTokenStorageFile to stop wrapping IOEs in IOEs
[ https://issues.apache.org/jira/browse/HADOOP-14788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HADOOP-14788: Attachment: HADOOP-14788.004.patch > Credentials readTokenStorageFile to stop wrapping IOEs in IOEs > -- > > Key: HADOOP-14788 > URL: https://issues.apache.org/jira/browse/HADOOP-14788 > Project: Hadoop Common > Issue Type: Bug > Components: security >Affects Versions: 2.8.1 >Reporter: Steve Loughran >Assignee: Ajay Kumar >Priority: Minor > Attachments: HADOOP-14788.001.patch, HADOOP-14788.002.patch, > HADOOP-14788.003.patch, HADOOP-14788.004.patch > > > When {{Credentials readTokenStorageFile}} gets an IOE. it catches & wraps > with the filename, so losing the exception class information. > Is this needed. or can it pass everything up? > If it is needed, well, it's a common pattern: wrapping the exception with the > path & operation. Maybe it's time to add an IOE version of > {{NetworkUtils.wrapException()}} which handles the broader set of IOEs -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14788) Credentials readTokenStorageFile to stop wrapping IOEs in IOEs
[ https://issues.apache.org/jira/browse/HADOOP-14788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HADOOP-14788: Attachment: (was: HADOOP-14788.004.patch) > Credentials readTokenStorageFile to stop wrapping IOEs in IOEs > -- > > Key: HADOOP-14788 > URL: https://issues.apache.org/jira/browse/HADOOP-14788 > Project: Hadoop Common > Issue Type: Bug > Components: security >Affects Versions: 2.8.1 >Reporter: Steve Loughran >Assignee: Ajay Kumar >Priority: Minor > Attachments: HADOOP-14788.001.patch, HADOOP-14788.002.patch, > HADOOP-14788.003.patch > > > When {{Credentials readTokenStorageFile}} gets an IOE. it catches & wraps > with the filename, so losing the exception class information. > Is this needed. or can it pass everything up? > If it is needed, well, it's a common pattern: wrapping the exception with the > path & operation. Maybe it's time to add an IOE version of > {{NetworkUtils.wrapException()}} which handles the broader set of IOEs -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14788) Credentials readTokenStorageFile to stop wrapping IOEs in IOEs
[ https://issues.apache.org/jira/browse/HADOOP-14788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HADOOP-14788: Attachment: HADOOP-14788.004.patch > Credentials readTokenStorageFile to stop wrapping IOEs in IOEs > -- > > Key: HADOOP-14788 > URL: https://issues.apache.org/jira/browse/HADOOP-14788 > Project: Hadoop Common > Issue Type: Bug > Components: security >Affects Versions: 2.8.1 >Reporter: Steve Loughran >Assignee: Ajay Kumar >Priority: Minor > Attachments: HADOOP-14788.001.patch, HADOOP-14788.002.patch, > HADOOP-14788.003.patch, HADOOP-14788.004.patch > > > When {{Credentials readTokenStorageFile}} gets an IOE. it catches & wraps > with the filename, so losing the exception class information. > Is this needed. or can it pass everything up? > If it is needed, well, it's a common pattern: wrapping the exception with the > path & operation. Maybe it's time to add an IOE version of > {{NetworkUtils.wrapException()}} which handles the broader set of IOEs -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-14788) Credentials readTokenStorageFile to stop wrapping IOEs in IOEs
[ https://issues.apache.org/jira/browse/HADOOP-14788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16291276#comment-16291276 ] Ajay Kumar edited comment on HADOOP-14788 at 12/14/17 5:59 PM: --- [~hanishakoneru] thanks for review. In patch v1 IOException was returned. Changed it to PathIOException on [~ste...@apache.org] [suggestion|https://issues.apache.org/jira/browse/HADOOP-14788?focusedCommentId=16183926=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16183926] (Assuming I understood it correctly. :)). {{In the method description of wrapException, "if exception" string is repeated.}} Thanks for catching that; addressed in patch v4. was (Author: ajayydv): [~hanishakoneru] thanks for review. In patch v1 IOException was returned. Changed it to PathIOException on [~ste...@apache.org] [suggestion|https://issues.apache.org/jira/browse/HADOOP-14788?focusedCommentId=16183926=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16183926] (Assuming I understood it correctly. :)). In the method description of wrapException, "if exception" string is repeated. Thanks for catching that; addressed in patch v4. > Credentials readTokenStorageFile to stop wrapping IOEs in IOEs > -- > > Key: HADOOP-14788 > URL: https://issues.apache.org/jira/browse/HADOOP-14788 > Project: Hadoop Common > Issue Type: Bug > Components: security >Affects Versions: 2.8.1 >Reporter: Steve Loughran >Assignee: Ajay Kumar >Priority: Minor > Attachments: HADOOP-14788.001.patch, HADOOP-14788.002.patch, > HADOOP-14788.003.patch, HADOOP-14788.004.patch > > > When {{Credentials readTokenStorageFile}} gets an IOE. it catches & wraps > with the filename, so losing the exception class information. > Is this needed. or can it pass everything up? > If it is needed, well, it's a common pattern: wrapping the exception with the > path & operation. Maybe it's time to add an IOE version of > {{NetworkUtils.wrapException()}} which handles the broader set of IOEs
[jira] [Commented] (HADOOP-14788) Credentials readTokenStorageFile to stop wrapping IOEs in IOEs
[ https://issues.apache.org/jira/browse/HADOOP-14788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16291276#comment-16291276 ] Ajay Kumar commented on HADOOP-14788: - [~hanishakoneru] thanks for review. In patch v1 IOException was returned. Changed it to PathIOException on [~ste...@apache.org] [suggestion|https://issues.apache.org/jira/browse/HADOOP-14788?focusedCommentId=16183926=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16183926] (Assuming I understood it correctly. :)). In the method description of wrapException, "if exception" string is repeated. Thanks for catching that; addressed in patch v4. > Credentials readTokenStorageFile to stop wrapping IOEs in IOEs > -- > > Key: HADOOP-14788 > URL: https://issues.apache.org/jira/browse/HADOOP-14788 > Project: Hadoop Common > Issue Type: Bug > Components: security >Affects Versions: 2.8.1 >Reporter: Steve Loughran >Assignee: Ajay Kumar >Priority: Minor > Attachments: HADOOP-14788.001.patch, HADOOP-14788.002.patch, > HADOOP-14788.003.patch > > > When {{Credentials readTokenStorageFile}} gets an IOE. it catches & wraps > with the filename, so losing the exception class information. > Is this needed. or can it pass everything up? > If it is needed, well, it's a common pattern: wrapping the exception with the > path & operation. Maybe it's time to add an IOE version of > {{NetworkUtils.wrapException()}} which handles the broader set of IOEs
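The core idea in this thread — add the filename to an IOException without losing its class — can be sketched as below. The method name and the fallback behavior are assumptions for illustration; the actual patch uses {{PathIOException}} per the discussion:

```java
import java.io.FileNotFoundException;
import java.io.IOException;
import java.lang.reflect.Constructor;

// Hedged sketch: rebuild the same exception type with an augmented message so
// callers can still catch the specific class (e.g. FileNotFoundException).
// If the class has no (String) constructor, fall back to a generic wrapper
// rather than dropping the path context.
class WrapIOESketch {

    static IOException wrapWithPath(String path, IOException e) {
        try {
            Constructor<? extends IOException> ctor =
                e.getClass().getConstructor(String.class);
            IOException wrapped = ctor.newInstance(path + ": " + e.getMessage());
            wrapped.initCause(e);  // keep the original for debugging
            return wrapped;
        } catch (ReflectiveOperationException ignored) {
            // No (String) constructor: wrap generically, still keeping the path.
            return new IOException(path + ": " + e, e);
        }
    }

    public static void main(String[] args) {
        IOException out =
            wrapWithPath("/tmp/creds", new FileNotFoundException("missing"));
        // The specific class survives, so callers can still catch FNFE.
        System.out.println(out.getClass().getSimpleName() + ": " + out.getMessage());
    }
}
```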
[jira] [Commented] (HADOOP-14914) Change to a safely casting long to int.
[ https://issues.apache.org/jira/browse/HADOOP-14914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16291248#comment-16291248 ] Ajay Kumar commented on HADOOP-14914: - [~vagarychen],[~yufeigu] thanks for review and commit. > Change to a safely casting long to int. > > > Key: HADOOP-14914 > URL: https://issues.apache.org/jira/browse/HADOOP-14914 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 3.1.0 >Reporter: Yufei Gu >Assignee: Ajay Kumar > Fix For: 3.1.0 > > Attachments: HADOOP-14914.001.patch, HADOOP-14914.002.patch > > > There are bunches of casting long to int like this: > {code} > long l = 123 > int i = (int) l; > {code} > This is not a safe cast. if l is greater than Integer.MAX_VALUE, i would be > negative, which is an unexpected behavior. We probably at least want to throw > an exception in that case. I suggest to use {{Math.toIntExact(longValue)}} to > replace them, which throws an exception if the value overflows an int. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
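The pattern adopted in HADOOP-14914 can be shown in a few lines. {{Math.toIntExact}} is a standard JDK method (Java 8+) that throws {{ArithmeticException}} on overflow instead of silently producing a negative value:

```java
// Demonstrates the unsafe narrowing cast from the issue description and the
// Math.toIntExact replacement that fails fast on overflow.
class SafeCastDemo {

    static int toIntSafely(long l) {
        return Math.toIntExact(l);  // throws ArithmeticException if l overflows int
    }

    public static void main(String[] args) {
        System.out.println(toIntSafely(123L));      // small values pass through
        long big = Integer.MAX_VALUE + 1L;
        System.out.println((int) big);              // -2147483648: the silent bug
        try {
            toIntSafely(big);
        } catch (ArithmeticException e) {
            System.out.println("overflow caught");  // the fix: an explicit error
        }
    }
}
```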
[jira] [Commented] (HADOOP-13625) Document FileSystem actions that trigger update of modification time.
[ https://issues.apache.org/jira/browse/HADOOP-13625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16291214#comment-16291214 ] Steve Loughran commented on HADOOP-13625: - As usual: what does HDFS do? > Document FileSystem actions that trigger update of modification time. > - > > Key: HADOOP-13625 > URL: https://issues.apache.org/jira/browse/HADOOP-13625 > Project: Hadoop Common > Issue Type: Improvement > Components: fs >Reporter: Chris Nauroth > > Hadoop users and developers of Hadoop-compatible file systems have sometimes > asked questions about which file system actions trigger an update of the > path's modification time. This issue proposes to document which actions do > and do not update modification time, so that the information is easy to find > without reading HDFS code or manually testing individual cases. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15109) TestDFSIO -read -random doesn't work on file sized 4GB
[ https://issues.apache.org/jira/browse/HADOOP-15109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16291165#comment-16291165 ] Ajay Kumar commented on HADOOP-15109: - [~zhoutai.zt], thanks for review. > TestDFSIO -read -random doesn't work on file sized 4GB > -- > > Key: HADOOP-15109 > URL: https://issues.apache.org/jira/browse/HADOOP-15109 > Project: Hadoop Common > Issue Type: Bug > Components: fs, test >Affects Versions: 3.0.0-beta1 >Reporter: zhoutai.zt >Assignee: Ajay Kumar >Priority: Minor > Attachments: HADOOP-15109.001.patch, HADOOP-15109.002.patch, Screen > Shot 2017-12-11 at 3.17.22 PM.png > > > TestDFSIO -read -random throws IllegalArgumentException on 4GB file. The > cause is: > {code:java} > private long nextOffset(long current) { > if(skipSize == 0) > return rnd.nextInt((int)(fileSize)); > if(skipSize > 0) > return (current < 0) ? 0 : (current + bufferSize + skipSize); > // skipSize < 0 > return (current < 0) ? Math.max(0, fileSize - bufferSize) : > Math.max(0, current + skipSize); > } > } > {code} > When {color:#d04437}_filesize_{color} exceeds signed int, (int)(filesize) > will be negative and cause Random.nextInt throws IllegalArgumentException("n > must be positive"). -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
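One possible fix direction for the quoted {{nextOffset}} is to draw the random offset as a bounded long instead of casting {{fileSize}} to int. This is a hedged sketch mirroring the quoted logic; it is not the attached patch:

```java
import java.util.concurrent.ThreadLocalRandom;

// Sketch: ThreadLocalRandom.nextLong(bound) keeps the full long range, so
// files at or beyond 2GB no longer trigger IllegalArgumentException.
class NextOffsetSketch {
    final long fileSize;
    final long bufferSize;
    final long skipSize;

    NextOffsetSketch(long fileSize, long bufferSize, long skipSize) {
        this.fileSize = fileSize;
        this.bufferSize = bufferSize;
        this.skipSize = skipSize;
    }

    long nextOffset(long current) {
        if (skipSize == 0) {
            // Was: rnd.nextInt((int) fileSize) -- negative once fileSize > 2^31-1.
            return ThreadLocalRandom.current().nextLong(fileSize);
        }
        if (skipSize > 0) {
            return current < 0 ? 0 : current + bufferSize + skipSize;
        }
        // skipSize < 0
        return current < 0 ? Math.max(0, fileSize - bufferSize)
                           : Math.max(0, current + skipSize);
    }

    public static void main(String[] args) {
        long fourGiB = 4L * 1024 * 1024 * 1024;
        NextOffsetSketch s = new NextOffsetSketch(fourGiB, 1 << 20, 0);
        long off = s.nextOffset(-1);
        System.out.println(off >= 0 && off < fourGiB);  // true
    }
}
```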
[jira] [Commented] (HADOOP-15085) Output streams closed with IOUtils suppressing write errors
[ https://issues.apache.org/jira/browse/HADOOP-15085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16291147#comment-16291147 ] Hudson commented on HADOOP-15085: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13376 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/13376/]) HADOOP-15085. Output streams closed with IOUtils suppressing write (jlowe: rev f8af0e2feb9f45aeaa9711dbf93115ffb1a07e5d) * (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileUtil.java * (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/nativeio/NativeIO.java * (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CopyCommands.java * (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/MapFile.java * (edit) hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/MiniKMS.java * (edit) hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfiguration.java * (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java * (edit) hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/TestCodec.java > Output streams closed with IOUtils suppressing write errors > --- > > Key: HADOOP-15085 > URL: https://issues.apache.org/jira/browse/HADOOP-15085 > Project: Hadoop Common > Issue Type: Bug >Reporter: Jason Lowe >Assignee: Jim Brennan > Fix For: 3.1.0, 3.0.1 > > Attachments: HADOOP-15085.001.patch, HADOOP-15085.002.patch, > HADOOP-15085.003.patch, HADOOP-15085.004.patch, HADOOP-15085.005.patch > > > There are a few places in hadoop-common that are closing an output stream > with IOUtils.cleanupWithLogger like this: > {code} > try { > ...write to outStream... 
> } finally { > IOUtils.cleanupWithLogger(LOG, outStream); > } > {code} > This suppresses any IOException that occurs during the close() method which > could lead to partial/corrupted output without throwing a corresponding > exception. The code should either use try-with-resources or explicitly close > the stream within the try block so the exception thrown during close() is > properly propagated as exceptions during write operations are. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
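The before/after behavior described in this issue can be demonstrated with a stream whose {{close()}} fails (a stand-in for, say, a flush hitting a full disk). With {{IOUtils.cleanupWithLogger}} in a finally block the failure is only logged; with try-with-resources it propagates to the caller:

```java
import java.io.IOException;
import java.io.OutputStream;

// FailingStream is a hypothetical stand-in for any output stream whose
// close() performs a final flush that can fail.
class CloseErrorDemo {

    static class FailingStream extends OutputStream {
        @Override public void write(int b) { /* pretend the write buffered fine */ }
        @Override public void close() throws IOException {
            throw new IOException("flush-on-close failed");
        }
    }

    // try-with-resources surfaces the close() failure instead of suppressing it.
    static boolean writeSucceeded() {
        try (OutputStream out = new FailingStream()) {
            out.write(42);
            return true;   // not reached: close() throws before the return completes
        } catch (IOException e) {
            return false;  // caller learns the output may be partial/corrupted
        }
    }

    public static void main(String[] args) {
        System.out.println(writeSucceeded());  // false
    }
}
```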
[jira] [Updated] (HADOOP-15085) Output streams closed with IOUtils suppressing write errors
[ https://issues.apache.org/jira/browse/HADOOP-15085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Lowe updated HADOOP-15085: Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.0.1 3.1.0 Status: Resolved (was: Patch Available) Thanks, Jim! I committed this to trunk and branch-3.0. The problem also exists in 2.x releases. Would you be willing to provide a patch for branch-2? > Output streams closed with IOUtils suppressing write errors > --- > > Key: HADOOP-15085 > URL: https://issues.apache.org/jira/browse/HADOOP-15085 > Project: Hadoop Common > Issue Type: Bug >Reporter: Jason Lowe >Assignee: Jim Brennan > Fix For: 3.1.0, 3.0.1 > > Attachments: HADOOP-15085.001.patch, HADOOP-15085.002.patch, > HADOOP-15085.003.patch, HADOOP-15085.004.patch, HADOOP-15085.005.patch > > > There are a few places in hadoop-common that are closing an output stream > with IOUtils.cleanupWithLogger like this: > {code} > try { > ...write to outStream... > } finally { > IOUtils.cleanupWithLogger(LOG, outStream); > } > {code} > This suppresses any IOException that occurs during the close() method which > could lead to partial/corrupted output without throwing a corresponding > exception. The code should either use try-with-resources or explicitly close > the stream within the try block so the exception thrown during close() is > properly propagated as exceptions during write operations are. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15085) Output streams closed with IOUtils suppressing write errors
[ https://issues.apache.org/jira/browse/HADOOP-15085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16291106#comment-16291106 ] Jason Lowe commented on HADOOP-15085: - Thanks for updating the patch! The unit test failure is unrelated. +1 for the latest patch. Committing this. > Output streams closed with IOUtils suppressing write errors > --- > > Key: HADOOP-15085 > URL: https://issues.apache.org/jira/browse/HADOOP-15085 > Project: Hadoop Common > Issue Type: Bug >Reporter: Jason Lowe >Assignee: Jim Brennan > Attachments: HADOOP-15085.001.patch, HADOOP-15085.002.patch, > HADOOP-15085.003.patch, HADOOP-15085.004.patch, HADOOP-15085.005.patch > > > There are a few places in hadoop-common that are closing an output stream > with IOUtils.cleanupWithLogger like this: > {code} > try { > ...write to outStream... > } finally { > IOUtils.cleanupWithLogger(LOG, outStream); > } > {code} > This suppresses any IOException that occurs during the close() method which > could lead to partial/corrupted output without throwing a corresponding > exception. The code should either use try-with-resources or explicitly close > the stream within the try block so the exception thrown during close() is > properly propagated as exceptions during write operations are.
[jira] [Commented] (HADOOP-15027) AliyunOSS: Improvements for Hadoop read from AliyunOSS
[ https://issues.apache.org/jira/browse/HADOOP-15027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16291053#comment-16291053 ] genericqa commented on HADOOP-15027: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 10m 3s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 17s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 19s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 20s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 23s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 17s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 12s{color} | {color:orange} hadoop-tools/hadoop-aliyun: The patch generated 10 new + 0 unchanged - 0 fixed = 10 total (was 0) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 40s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 18s{color} | {color:green} hadoop-aliyun in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 54m 9s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | HADOOP-15027 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12902085/HADOOP-15027.004.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 247bf0b46188 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 2564b4d | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/13828/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aliyun.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/13828/testReport/ | | Max. process+thread count | 301 (vs. ulimit of 5000) | | modules | C: hadoop-tools/hadoop-aliyun U: hadoop-tools/hadoop-aliyun | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/13828/console | | Powered by | Apache Yetus 0.7.0-SNAPSHOT http://yetus.apache.org |
[jira] [Commented] (HADOOP-15027) AliyunOSS: Improvements for Hadoop read from AliyunOSS
[ https://issues.apache.org/jira/browse/HADOOP-15027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16290933#comment-16290933 ] wujinhu commented on HADOOP-15027: -- Attached a patch based on HADOOP-15039. > AliyunOSS: Improvements for Hadoop read from AliyunOSS > -- > > Key: HADOOP-15027 > URL: https://issues.apache.org/jira/browse/HADOOP-15027 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/oss >Affects Versions: 3.0.0 >Reporter: wujinhu >Assignee: wujinhu > Attachments: HADOOP-15027.001.patch, HADOOP-15027.002.patch, > HADOOP-15027.003.patch, HADOOP-15027.004.patch > > > Currently, read performance is poor when Hadoop reads from AliyunOSS. It > needs about 1 minute to read 1 GB from OSS. > Class AliyunOSSInputStream uses a single thread to read data from AliyunOSS, > so we can refactor it to use multi-threaded pre-read to improve performance. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15027) AliyunOSS: Improvements for Hadoop read from AliyunOSS
[ https://issues.apache.org/jira/browse/HADOOP-15027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] wujinhu updated HADOOP-15027: - Attachment: HADOOP-15027.004.patch > AliyunOSS: Improvements for Hadoop read from AliyunOSS > -- > > Key: HADOOP-15027 > URL: https://issues.apache.org/jira/browse/HADOOP-15027 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/oss >Affects Versions: 3.0.0 >Reporter: wujinhu >Assignee: wujinhu > Attachments: HADOOP-15027.001.patch, HADOOP-15027.002.patch, > HADOOP-15027.003.patch, HADOOP-15027.004.patch > > > Currently, read performance is poor when Hadoop reads from AliyunOSS. It > needs about 1 minute to read 1 GB from OSS. > Class AliyunOSSInputStream uses a single thread to read data from AliyunOSS, > so we can refactor it to use multi-threaded pre-read to improve performance.
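The multi-threaded pre-read idea described in HADOOP-15027 can be sketched as follows. This is a minimal illustration, not the actual patch: it assumes a hypothetical `fetchRange` standing in for an OSS range GET, splits the requested range into fixed-size parts, fetches each part on a worker thread, and reassembles them in order.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PreReadSketch {
  static final int PART_SIZE = 4;  // tiny for demonstration; real code would use MBs

  // Simulated remote read; in the real patch this would be an OSS range request.
  static byte[] fetchRange(byte[] remote, int off, int len) {
    byte[] part = new byte[len];
    System.arraycopy(remote, off, part, 0, len);
    return part;
  }

  static byte[] parallelRead(byte[] remote) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(4);
    List<Future<byte[]>> parts = new ArrayList<>();
    // Submit one fetch task per fixed-size part of the range.
    for (int off = 0; off < remote.length; off += PART_SIZE) {
      final int o = off;
      final int len = Math.min(PART_SIZE, remote.length - o);
      parts.add(pool.submit(() -> fetchRange(remote, o, len)));
    }
    // Futures were submitted in offset order, so reassembly is sequential.
    byte[] out = new byte[remote.length];
    int pos = 0;
    for (Future<byte[]> f : parts) {
      byte[] p = f.get();
      System.arraycopy(p, 0, out, pos, p.length);
      pos += p.length;
    }
    pool.shutdown();
    return out;
  }

  public static void main(String[] args) throws Exception {
    byte[] data = "0123456789abcdef".getBytes();
    System.out.println(new String(parallelRead(data)));
  }
}
```

The win comes from overlapping network latency across parts: while one worker waits on a range request, others are already fetching the next parts.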
[jira] [Commented] (HADOOP-14444) New implementation of ftp and sftp filesystems
[ https://issues.apache.org/jira/browse/HADOOP-1?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16290915#comment-16290915 ] Lukas Waldmann commented on HADOOP-1: - Steve, I was thinking about the security - is there some shared keychain manager functionality in Hadoop? That would allow us to share login information across the cluster without revealing sensitive information. > New implementation of ftp and sftp filesystems > -- > > Key: HADOOP-1 > URL: https://issues.apache.org/jira/browse/HADOOP-1 > Project: Hadoop Common > Issue Type: New Feature > Components: fs >Affects Versions: 2.8.0 >Reporter: Lukas Waldmann >Assignee: Lukas Waldmann > Attachments: HADOOP-1.10.patch, HADOOP-1.11.patch, > HADOOP-1.12.patch, HADOOP-1.13.patch, HADOOP-1.2.patch, > HADOOP-1.3.patch, HADOOP-1.4.patch, HADOOP-1.5.patch, > HADOOP-1.6.patch, HADOOP-1.7.patch, HADOOP-1.8.patch, > HADOOP-1.9.patch, HADOOP-1.patch > > > The current implementations of the FTP and SFTP filesystems have severe > limitations and performance issues when dealing with a high number of files. My > patch solves those issues and integrates both filesystems in such a way that most of the > core functionality is common to both, which simplifies maintenance. > The core features: > * Support for HTTP/SOCKS proxies > * Support for passive FTP > * Support for explicit FTPS (SSL/TLS) > * Support for connection pooling - a new connection is not created for every > single command but reused from the pool. > For a huge number of files it shows an order-of-magnitude performance improvement > over non-pooled connections. > * Caching of directory trees. For FTP you always need to list the whole directory > whenever you ask for information about a particular file. > Again, for a huge number of files it shows an order-of-magnitude performance > improvement over non-cached connections. 
> * Support for keep-alive (NOOP) messages to avoid connection drops > * Support for Unix-style or regexp wildcard globs - useful for listing > particular files across the whole directory tree > * Support for re-establishing broken FTP data transfers - which can happen > surprisingly often
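The connection-pooling feature in the list above can be sketched generically. This is an illustrative pool, not the code from the patch (the class name and the idea of pooling via a bounded queue are assumptions): borrow an idle connection if one exists, otherwise create one, and return connections to the pool instead of closing them after every command.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.function.Supplier;

public class ConnectionPool<C> {
  private final BlockingQueue<C> idle;   // connections not currently in use
  private final Supplier<C> factory;     // creates a new connection when the pool is empty

  public ConnectionPool(int capacity, Supplier<C> factory) {
    this.idle = new ArrayBlockingQueue<>(capacity);
    this.factory = factory;
  }

  public C borrow() {
    C c = idle.poll();                   // reuse an idle connection if available
    return (c != null) ? c : factory.get();
  }

  public void release(C c) {
    if (!idle.offer(c)) {
      // Pool is full; a real implementation would close the connection here.
    }
  }

  public static void main(String[] args) {
    final int[] created = {0};
    ConnectionPool<String> pool =
        new ConnectionPool<>(2, () -> "conn-" + (created[0]++));
    String a = pool.borrow();            // pool empty: factory creates conn-0
    pool.release(a);
    String b = pool.borrow();            // reused from the pool, no new connection
    System.out.println("reused=" + (a == b) + " created=" + created[0]);
  }
}
```

The order-of-magnitude speedup the description mentions follows directly: the per-command cost of an FTP login handshake is paid once per pooled connection rather than once per command.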
[jira] [Commented] (HADOOP-15106) FileSystem::open(PathHandle) should throw a specific exception on validation failure
[ https://issues.apache.org/jira/browse/HADOOP-15106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16290664#comment-16290664 ] genericqa commented on HADOOP-15106: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 31s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 22s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 4s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 20s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 25s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 33s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 6s{color} | {color:orange} root: The patch generated 1 new + 100 unchanged - 0 fixed = 101 total (was 100) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 4s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 23s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 14s{color} | {color:red} hadoop-common in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 31s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 33s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 93m 21s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.fs.TestFilterFileSystem | | | hadoop.fs.TestHarFileSystem | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | HADOOP-15106 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12902022/HADOOP-15106.01.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux ab003da4be66 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 91c96bd | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs |
[jira] [Updated] (HADOOP-15106) FileSystem::open(PathHandle) should throw a specific exception on validation failure
[ https://issues.apache.org/jira/browse/HADOOP-15106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Douglas updated HADOOP-15106: --- Attachment: HADOOP-15106.01.patch > FileSystem::open(PathHandle) should throw a specific exception on validation > failure > > > Key: HADOOP-15106 > URL: https://issues.apache.org/jira/browse/HADOOP-15106 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Chris Douglas >Priority: Minor > Attachments: HADOOP-15106.00.patch, HADOOP-15106.01.patch > > > Callers of {{FileSystem::open(PathHandle)}} cannot distinguish between I/O > errors and an invalid handle. The signature should include a specific, > checked exception for this case.
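The API change discussed in HADOOP-15106 can be illustrated with a small sketch. The names here (`InvalidPathHandleException`, the `open` stand-in) are illustrative assumptions, not necessarily what the committed patch uses: a dedicated checked subclass of `IOException` lets callers catch handle-validation failures separately from ordinary I/O errors.

```java
import java.io.IOException;

public class PathHandleDemo {
  // Specific checked exception for a handle that fails validation.
  static class InvalidPathHandleException extends IOException {
    InvalidPathHandleException(String msg) { super(msg); }
  }

  // Stand-in for FileSystem::open(PathHandle): rejects a "stale" handle,
  // as a real filesystem might when the referenced file has moved or changed.
  static String open(String handle) throws IOException {
    if ("stale".equals(handle)) {
      throw new InvalidPathHandleException("handle no longer resolves: " + handle);
    }
    return "stream:" + handle;
  }

  public static void main(String[] args) {
    try {
      open("stale");
    } catch (InvalidPathHandleException e) {
      // Caller can react specifically, e.g. re-resolve the path and retry.
      System.out.println("invalid handle: " + e.getMessage());
    } catch (IOException e) {
      System.out.println("I/O error: " + e.getMessage());
    }
  }
}
```

Because the new exception extends `IOException`, existing callers that only catch `IOException` keep compiling; only callers that want the distinction need the extra catch block.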
[jira] [Updated] (HADOOP-15106) FileSystem::open(PathHandle) should throw a specific exception on validation failure
[ https://issues.apache.org/jira/browse/HADOOP-15106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Douglas updated HADOOP-15106: --- Attachment: (was: HDFS-15106.01.patch) > FileSystem::open(PathHandle) should throw a specific exception on validation > failure > > > Key: HADOOP-15106 > URL: https://issues.apache.org/jira/browse/HADOOP-15106 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Chris Douglas >Priority: Minor > Attachments: HADOOP-15106.00.patch, HADOOP-15106.01.patch > > > Callers of {{FileSystem::open(PathHandle)}} cannot distinguish between I/O > errors and an invalid handle. The signature should include a specific, > checked exception for this case.
[jira] [Updated] (HADOOP-15106) FileSystem::open(PathHandle) should throw a specific exception on validation failure
[ https://issues.apache.org/jira/browse/HADOOP-15106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Douglas updated HADOOP-15106: --- Attachment: HDFS-15106.01.patch > FileSystem::open(PathHandle) should throw a specific exception on validation > failure > > > Key: HADOOP-15106 > URL: https://issues.apache.org/jira/browse/HADOOP-15106 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Chris Douglas >Priority: Minor > Attachments: HADOOP-15106.00.patch, HDFS-15106.01.patch > > > Callers of {{FileSystem::open(PathHandle)}} cannot distinguish between I/O > errors and an invalid handle. The signature should include a specific, > checked exception for this case.
[jira] [Updated] (HADOOP-15106) FileSystem::open(PathHandle) should throw a specific exception on validation failure
[ https://issues.apache.org/jira/browse/HADOOP-15106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Douglas updated HADOOP-15106: --- Status: Patch Available (was: Open) > FileSystem::open(PathHandle) should throw a specific exception on validation > failure > > > Key: HADOOP-15106 > URL: https://issues.apache.org/jira/browse/HADOOP-15106 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Chris Douglas >Priority: Minor > Attachments: HADOOP-15106.00.patch, HDFS-15106.01.patch > > > Callers of {{FileSystem::open(PathHandle)}} cannot distinguish between I/O > errors and an invalid handle. The signature should include a specific, > checked exception for this case.