[jira] [Commented] (HADOOP-9092) Coverage fixing for org.apache.hadoop.mapreduce.jobhistory
[ https://issues.apache.org/jira/browse/HADOOP-9092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13505274#comment-13505274 ] Hadoop QA commented on HADOOP-9092:
{color:green}+1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12555123/HADOOP-9092-trunk-a.patch against trunk revision .
{color:green}+1 @author{color}. The patch does not contain any @author tags.
{color:green}+1 tests included{color}. The patch appears to include 4 new or modified test files.
{color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.
{color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages.
{color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse.
{color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings.
{color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings.
{color:green}+1 core tests{color}. The patch passed unit tests in hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app and hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs.
{color:green}+1 contrib tests{color}. The patch passed contrib unit tests.
Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/1833//testReport/
Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/1833//console
This message is automatically generated.
> Coverage fixing for org.apache.hadoop.mapreduce.jobhistory
> ---
>
> Key: HADOOP-9092
> URL: https://issues.apache.org/jira/browse/HADOOP-9092
> Project: Hadoop Common
> Issue Type: Test
> Components: tools
> Reporter: Aleksey Gorshkov
> Attachments: HADOOP-9092-branch-0.23-a.patch, HADOOP-9092-branch-0.23.patch, HADOOP-9092-branch-2-a.patch, HADOOP-9092-branch-2.patch, HADOOP-9092-trunk-a.patch, HADOOP-9092-trunk.patch
>
> Coverage fixing for package org.apache.hadoop.mapreduce.jobhistory

--
This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-9092) Coverage fixing for org.apache.hadoop.mapreduce.jobhistory
[ https://issues.apache.org/jira/browse/HADOOP-9092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Gorshkov updated HADOOP-9092:
Attachment: HADOOP-9092-trunk-a.patch, HADOOP-9092-branch-2-a.patch, HADOOP-9092-branch-0.23-a.patch
[jira] [Commented] (HADOOP-9092) Coverage fixing for org.apache.hadoop.mapreduce.jobhistory
[ https://issues.apache.org/jira/browse/HADOOP-9092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13505262#comment-13505262 ] Aleksey Gorshkov commented on HADOOP-9092:
I've corrected the code.
[jira] [Commented] (HADOOP-8532) [Configuration] Increase or make variable substitution depth configurable
[ https://issues.apache.org/jira/browse/HADOOP-8532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13505245#comment-13505245 ] Aaron T. Myers commented on HADOOP-8532:
Raising it to a higher value is fine by me.

> [Configuration] Increase or make variable substitution depth configurable
> ---
>
> Key: HADOOP-8532
> URL: https://issues.apache.org/jira/browse/HADOOP-8532
> Project: Hadoop Common
> Issue Type: Improvement
> Components: conf
> Affects Versions: 2.0.0-alpha
> Reporter: Harsh J
>
> We've had some users recently complain that the default MAX_SUBST hardcoded limit of 20 isn't sufficient for their substitution needs, and they wished it were configurable rather than having to work around it with temporary smaller substitutes that are later combined into the full value. We should consider raising the hardcoded default, or providing a way to make it configurable instead.
> Related: HIVE-2021 changed something similar for their HiveConf classes.
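For context, Configuration's variable expansion repeatedly substitutes ${var} references and gives up after a fixed number of rounds (the hardcoded MAX_SUBST of 20 under discussion). A minimal sketch of such a loop with the depth passed in as a parameter; the class name, regex, and method are illustrative, not Hadoop's actual implementation:

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class SubstDemo {
    private static final Pattern VAR = Pattern.compile("\\$\\{([^}$]+)\\}");

    /** Expand ${key} references in value, giving up after maxSubst rounds. */
    static String substitute(String value, Map<String, String> props, int maxSubst) {
        String result = value;
        for (int i = 0; i < maxSubst; i++) {
            Matcher m = VAR.matcher(result);
            if (!m.find()) {
                return result; // nothing left to expand
            }
            String replacement = props.get(m.group(1));
            if (replacement == null) {
                return result; // unresolved variable: leave it as-is
            }
            result = result.substring(0, m.start()) + replacement + result.substring(m.end());
        }
        return result; // depth exhausted, possibly still containing ${...}
    }

    public static void main(String[] args) {
        Map<String, String> props = Map.of("a", "${b}/x", "b", "base");
        System.out.println(substitute("${a}", props, 20)); // base/x
    }
}
```

A deeper chain of references than maxSubst simply stops expanding, which is the behavior users hit when 20 rounds were not enough.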
[jira] [Updated] (HADOOP-9018) Reject invalid Windows URIs
[ https://issues.apache.org/jira/browse/HADOOP-9018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal updated HADOOP-9018:
Attachment: HADOOP-9018.patch
Patch to address some of the issues raised in reference to HADOOP-8953 and HADOOP-8977. Makes path parsing stricter on Windows.

> Reject invalid Windows URIs
> ---
>
> Key: HADOOP-9018
> URL: https://issues.apache.org/jira/browse/HADOOP-9018
> Project: Hadoop Common
> Issue Type: Improvement
> Components: fs
> Affects Versions: trunk-win
> Environment: Windows
> Reporter: Arpit Agarwal
> Assignee: Arpit Agarwal
> Attachments: HADOOP-9018.patch
> Original Estimate: 24h
> Remaining Estimate: 24h
>
> This JIRA is to make handling of improperly constructed file URIs for Windows local paths more rigorous, e.g. reject "file:///c:\\Windows".
> Valid file URI syntax is explained at http://blogs.msdn.com/b/ie/archive/2006/12/06/file-uris-in-windows.aspx.
> Also see https://issues.apache.org/jira/browse/HADOOP-8953
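As a quick illustration of why "file:///c:\Windows" is malformed: a backslash is not a legal URI character, so strict parsing with the standard java.net.URI class already rejects it. This sketch only demonstrates that JDK behavior; it is not the validation logic from the attached patch:

```java
import java.net.URI;
import java.net.URISyntaxException;

public class WindowsUriCheck {
    /** Returns true iff s parses as a syntactically valid URI. */
    static boolean isValidUri(String s) {
        try {
            new URI(s);
            return true;
        } catch (URISyntaxException e) {
            return false; // e.g. contains a backslash, which is illegal in URIs
        }
    }

    public static void main(String[] args) {
        // Forward slashes: a well-formed file URI for a Windows local path.
        System.out.println(isValidUri("file:///c:/Windows"));  // true
        // Backslash: rejected by strict parsing.
        System.out.println(isValidUri("file:///c:\\Windows")); // false
    }
}
```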
[jira] [Commented] (HADOOP-8989) hadoop dfs -find feature
[ https://issues.apache.org/jira/browse/HADOOP-8989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13505130#comment-13505130 ] wolfgang hoschek commented on HADOOP-8989:
In addition, it would be good to have a -starttime option so that the time-related options (e.g. -mmin, -mtime, -amin, -atime) can work relative to a specific absolute timestamp (say, give me all files modified since yesterday midnight) instead of relative to whatever "now" happens to be at the time of command execution. The dateTimePattern might be ISO 8601 by default, and could be set to any java.text.SimpleDateFormat format pattern. The FindOptions class already seems to foresee such usage; it's just a matter of exposing it at the command level.

> hadoop dfs -find feature
> ---
>
> Key: HADOOP-8989
> URL: https://issues.apache.org/jira/browse/HADOOP-8989
> Project: Hadoop Common
> Issue Type: New Feature
> Reporter: Marco Nicosia
> Assignee: Jonathan Allen
> Attachments: HADOOP-8989.patch, HADOOP-8989.patch, HADOOP-8989.patch
>
> Both sysadmins and users make frequent use of the unix 'find' command, but Hadoop has no correlate. Without this, users are writing scripts which make heavy use of hadoop dfs -lsr, and implementing find one-offs. I think hdfs -lsr is somewhat taxing on the NameNode, and a really slow experience on the client side. Possibly an in-NameNode find operation would be only a bit more taxing on the NameNode, but significantly faster from the client's point of view?
> The minimum set of options I can think of which would make a Hadoop find command generally useful is (in priority order):
> * -type (file or directory, for now)
> * -atime/-ctime/-mtime (... and -creationtime?) (both + and - arguments)
> * -print0 (for piping to xargs -0)
> * -depth
> * -owner/-group (and -nouser/-nogroup)
> * -name (allowing for shell pattern, or even regex?)
> * -perm
> * -size
> One possible special case, but could possibly be really cool if it ran from within the NameNode:
> * -delete
> The "hadoop dfs -lsr | hadoop dfs -rm" cycle is really, really slow.
> Lower priority, some people do use operators, mostly to execute -or searches such as:
> * find / \(-nouser -or -nogroup\)
> Finally, I thought I'd include a link to the [Posix spec for find|http://www.opengroup.org/onlinepubs/009695399/utilities/find.html]
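The shape of the requested feature, a recursive walk that applies -type and -name style predicates at each entry, can be sketched against a local file system with java.nio.file. This stands in for the client-side traversal the issue describes; it is not the HDFS patch, and the MiniFind name and "f"/"d" type codes are made up for the example:

```java
import java.io.IOException;
import java.nio.file.FileVisitResult;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.PathMatcher;
import java.nio.file.SimpleFileVisitor;
import java.nio.file.attribute.BasicFileAttributes;
import java.util.ArrayList;
import java.util.List;

public class MiniFind {
    /** Collect paths under root whose name matches the glob; type is "f" (file) or "d" (dir). */
    static List<Path> find(Path root, String glob, String type) throws IOException {
        PathMatcher matcher = root.getFileSystem().getPathMatcher("glob:" + glob);
        List<Path> hits = new ArrayList<>();
        Files.walkFileTree(root, new SimpleFileVisitor<Path>() {
            @Override
            public FileVisitResult visitFile(Path p, BasicFileAttributes a) {
                if (type.equals("f") && matcher.matches(p.getFileName())) hits.add(p);
                return FileVisitResult.CONTINUE;
            }
            @Override
            public FileVisitResult preVisitDirectory(Path p, BasicFileAttributes a) {
                if (type.equals("d") && p.getFileName() != null
                        && matcher.matches(p.getFileName())) hits.add(p);
                return FileVisitResult.CONTINUE;
            }
        });
        return hits;
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempDirectory("findDemo");
        Files.createFile(tmp.resolve("a.txt"));
        Files.createFile(tmp.resolve("b.log"));
        System.out.println(find(tmp, "*.txt", "f").size()); // 1
    }
}
```

Every predicate evaluation here is a local stat, which is exactly why the same walk over hadoop dfs -lsr output is taxing on the NameNode.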
[jira] [Created] (HADOOP-9100) Fix TODO items in FsShell
Suresh Srinivas created HADOOP-9100:
Summary: Fix TODO items in FsShell
Key: HADOOP-9100
URL: https://issues.apache.org/jira/browse/HADOOP-9100
Project: Hadoop Common
Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
Priority: Minor
A number of TODO items in FsShell need to be fixed.
[jira] [Resolved] (HADOOP-9099) NetUtils.normalizeHostName fails on domains where UnknownHost resolves to an IP address
[ https://issues.apache.org/jira/browse/HADOOP-9099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tsz Wo (Nicholas), SZE resolved HADOOP-9099.
Resolution: Fixed
Fix Version/s: 1-win, 1.2.0
I have committed this. Thanks, Ivan!

> NetUtils.normalizeHostName fails on domains where UnknownHost resolves to an IP address
> ---
>
> Key: HADOOP-9099
> URL: https://issues.apache.org/jira/browse/HADOOP-9099
> Project: Hadoop Common
> Issue Type: Bug
> Components: test
> Affects Versions: 1-win
> Reporter: Ivan Mitic
> Assignee: Ivan Mitic
> Priority: Minor
> Fix For: 1.2.0, 1-win
> Attachments: HADOOP-9099.branch-1-win.patch
>
> I just hit this failure. We should use a more unique string than "UnknownHost":
> Testcase: testNormalizeHostName took 0.007 sec
> FAILED
> expected:<[65.53.5.181]> but was:<[UnknownHost]>
> junit.framework.AssertionFailedError: expected:<[65.53.5.181]> but was:<[UnknownHost]>
> at org.apache.hadoop.net.TestNetUtils.testNormalizeHostName(TestNetUtils.java:347)
> Will post a patch in a bit.
[jira] [Updated] (HADOOP-9099) NetUtils.normalizeHostName fails on domains where UnknownHost resolves to an IP address
[ https://issues.apache.org/jira/browse/HADOOP-9099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tsz Wo (Nicholas), SZE updated HADOOP-9099:
Component/s: test
Priority: Minor (was: Major)
Hadoop Flags: Reviewed
+1 patch looks good.
[jira] [Commented] (HADOOP-9099) NetUtils.normalizeHostName fails on domains where UnknownHost resolves to an IP address
[ https://issues.apache.org/jira/browse/HADOOP-9099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13505082#comment-13505082 ] Tsz Wo (Nicholas), SZE commented on HADOOP-9099:
I understand the problem after looking at Ivan's patch - "UnknownHost" was resolved to an IP in your setting.
[jira] [Commented] (HADOOP-8989) hadoop dfs -find feature
[ https://issues.apache.org/jira/browse/HADOOP-8989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13505080#comment-13505080 ] Allen Wittenauer commented on HADOOP-8989:
Has anyone studied the impact on the NN yet?
[jira] [Commented] (HADOOP-9099) NetUtils.normalizeHostName fails on domains where UnknownHost resolves to an IP address
[ https://issues.apache.org/jira/browse/HADOOP-9099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13505079#comment-13505079 ] Tsz Wo (Nicholas), SZE commented on HADOOP-9099:
TestNetUtils in branch-1-win does not fail on my machine. Is there any special setting in your machine/network? BTW, I just had a quick look at the InetAddress source code (http://www.docjar.com/html/api/java/net/InetAddress.java.html). It seems that it won't do any lookup if the input string is already a numerical IP address. However, that is probably JDK-dependent.
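The underlying flakiness is that whether a placeholder hostname resolves depends on the local resolver (for example, an ISP's wildcard DNS or a domain search suffix can make "UnknownHost" resolve). A name under the RFC 2606 reserved ".invalid" TLD is guaranteed never to resolve, which is one way to make such a test deterministic. This sketch just probes the local resolver; it is not the test fix from the patch:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class HostProbe {
    /** Returns true iff name resolves to an address with this machine's resolver. */
    static boolean resolves(String name) {
        try {
            InetAddress.getByName(name);
            return true;
        } catch (UnknownHostException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // May print true or false depending on the local DNS setup,
        // which is exactly why the original test was environment-dependent.
        System.out.println(resolves("UnknownHost"));
        // IP literals are parsed directly, with no DNS lookup.
        System.out.println(resolves("127.0.0.1"));          // true
        // The ".invalid" TLD is reserved and must never resolve.
        System.out.println(resolves("some-host.invalid"));  // false
    }
}
```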
[jira] [Commented] (HADOOP-8989) hadoop dfs -find feature
[ https://issues.apache.org/jira/browse/HADOOP-8989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13505074#comment-13505074 ] wolfgang hoschek commented on HADOOP-8989:
I gave it a try and this patch is awesome. It would be really useful to add the following options as defined here: http://linux.die.net/man/1/find
* -regextype "java" (no need for other regex flavours here, but it should be possible to add classic flavours later)
* -regex
* -maxdepth
* -mindepth
* -printf
[jira] [Commented] (HADOOP-9090) Refactor MetricsSystemImpl to allow for an on-demand publish system
[ https://issues.apache.org/jira/browse/HADOOP-9090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13505067#comment-13505067 ] Luke Lu commented on HADOOP-9090:
I think I understand what you're doing now. I'm concerned about creating a thread for each record in publishMetricsNow and about the use of thread interrupt/stop, which seems heavy-handed and isn't guaranteed to work either (setDaemon(true) on the worker thread would help, though). How about making putMetricsImmediate call the regular putMetrics and wait for the publish to complete? The signalling is slightly tricky in this case. I'm sure you'll figure this out, though :)

> Refactor MetricsSystemImpl to allow for an on-demand publish system
> ---
>
> Key: HADOOP-9090
> URL: https://issues.apache.org/jira/browse/HADOOP-9090
> Project: Hadoop Common
> Issue Type: New Feature
> Components: metrics
> Reporter: Mostafa Elhemali
> Priority: Minor
> Attachments: HADOOP-9090.2.patch, HADOOP-9090.justEnhanceDefaultImpl.2.patch, HADOOP-9090.justEnhanceDefaultImpl.patch, HADOOP-9090.patch
>
> We have a need to publish metrics out of some short-living processes, which is not really well-suited to the current metrics system implementation, which periodically publishes metrics asynchronously (a behavior that works great for long-living processes). Of course I could write my own metrics system, but it seems like such a waste to rewrite all the awesome code currently in MetricsSystemImpl and supporting classes.
> The way I'm proposing to solve this is to:
> 1. Refactor the MetricsSystemImpl class into an abstract base MetricsSystemImpl class (common configuration and other code) and a concrete PeriodicPublishMetricsSystemImpl class (timer thread).
> 2. Refactor the MetricsSinkAdapter class into an abstract base MetricsSinkAdapter class (common configuration and other code) and a concrete AsyncMetricsSinkAdapter class (asynchronous publishing using the SinkQueue).
> 3. Derive a new simple class OnDemandPublishMetricsSystemImpl from MetricsSystemImpl that just exposes a synchronous publish() method to do all the work.
> 4. Derive a SyncMetricsSinkAdapter class from MetricsSinkAdapter to just synchronously push metrics to the underlying sink.
> Does that sound reasonable? I'll attach the patch with all this coded up and simple tests (could use some polish I guess, but wanted to get everyone's opinion first). Notice that this is somewhat of a breaking change since MetricsSystemImpl is public (although it's marked with InterfaceAudience.Private); if the breaking change is a problem I could just rename the refactored classes so that PeriodicPublishMetricsSystemImpl is still called MetricsSystemImpl (and MetricsSystemImpl -> BaseMetricsSystemImpl).
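Luke's suggestion, routing the immediate call through the regular queue and blocking until the consumer thread has published, can be sketched with a latch attached to each enqueued record. Every name here (SyncPublishDemo, putMetricsImmediate, publishToSink) is illustrative rather than the actual MetricsSystemImpl API:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class SyncPublishDemo {
    /** A queued item: the metrics payload plus a latch the producer can wait on. */
    record Item(String payload, CountDownLatch done) {}

    private final BlockingQueue<Item> queue = new LinkedBlockingQueue<>();

    public SyncPublishDemo() {
        Thread consumer = new Thread(() -> {
            try {
                while (true) {
                    Item item = queue.take();
                    publishToSink(item.payload()); // the normal async publish path
                    item.done().countDown();       // signal the waiting producer
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.setDaemon(true); // don't keep short-lived processes alive
        consumer.start();
    }

    void publishToSink(String payload) { /* write to the sink */ }

    /** Enqueue like putMetrics would, then wait (bounded) until the record is published. */
    boolean putMetricsImmediate(String payload, long timeoutMs) throws InterruptedException {
        CountDownLatch done = new CountDownLatch(1);
        queue.put(new Item(payload, done));
        return done.await(timeoutMs, TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        SyncPublishDemo demo = new SyncPublishDemo();
        System.out.println(demo.putMetricsImmediate("jvm.metrics", 1000)); // true
    }
}
```

The bounded await is the "slightly tricky signalling": the caller gets synchronous semantics without spawning a thread per record or resorting to interrupt/stop.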
[jira] [Commented] (HADOOP-9099) NetUtils.normalizeHostName fails on domains where UnknownHost resolves to an IP address
[ https://issues.apache.org/jira/browse/HADOOP-9099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13505062#comment-13505062 ] Mostafa Elhemali commented on HADOOP-9099:
Personally I would've used something even more unlikely, such as "ThisIsNotARealHostName-ItJustCant", but I guess UnknownHost123 is not very likely to exist :). +1 (non-binding).
[jira] [Updated] (HADOOP-9099) NetUtils.normalizeHostName fails on domains where UnknownHost resolves to an IP address
[ https://issues.apache.org/jira/browse/HADOOP-9099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan Mitic updated HADOOP-9099:
Attachment: HADOOP-9099.branch-1-win.patch
Attaching the patch.
[jira] [Created] (HADOOP-9099) NetUtils.normalizeHostName fails on domains where UnknownHost resolves to an IP address
Ivan Mitic created HADOOP-9099:
Summary: NetUtils.normalizeHostName fails on domains where UnknownHost resolves to an IP address
Key: HADOOP-9099
URL: https://issues.apache.org/jira/browse/HADOOP-9099
Project: Hadoop Common
Issue Type: Bug
Affects Versions: 1-win
Reporter: Ivan Mitic
Assignee: Ivan Mitic
[jira] [Commented] (HADOOP-9090) Refactor MetricsSystemImpl to allow for an on-demand publish system
[ https://issues.apache.org/jira/browse/HADOOP-9090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13505015#comment-13505015 ] Hadoop QA commented on HADOOP-9090:
{color:green}+1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12555070/HADOOP-9090.justEnhanceDefaultImpl.2.patch against trunk revision .
{color:green}+1 @author{color}. The patch does not contain any @author tags.
{color:green}+1 tests included{color}. The patch appears to include 2 new or modified test files.
{color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.
{color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages.
{color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse.
{color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings.
{color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings.
{color:green}+1 core tests{color}. The patch passed unit tests in hadoop-common-project/hadoop-common.
{color:green}+1 contrib tests{color}. The patch passed contrib unit tests.
Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/1832//testReport/
Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/1832//console
This message is automatically generated.
[jira] [Commented] (HADOOP-9088) Add Murmur3 hash
[ https://issues.apache.org/jira/browse/HADOOP-9088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13505008#comment-13505008 ] Hadoop QA commented on HADOOP-9088:
{color:green}+1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12555068/murmur3-6.txt against trunk revision .
{color:green}+1 @author{color}. The patch does not contain any @author tags.
{color:green}+1 tests included{color}. The patch appears to include 2 new or modified test files.
{color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.
{color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages.
{color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse.
{color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings.
{color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings.
{color:green}+1 core tests{color}. The patch passed unit tests in hadoop-common-project/hadoop-common.
{color:green}+1 contrib tests{color}. The patch passed contrib unit tests.
Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/1831//testReport/
Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/1831//console
This message is automatically generated.

> Add Murmur3 hash
> ---
>
> Key: HADOOP-9088
> URL: https://issues.apache.org/jira/browse/HADOOP-9088
> Project: Hadoop Common
> Issue Type: Improvement
> Reporter: Radim Kolar
> Assignee: Radim Kolar
> Attachments: murmur3-2.txt, murmur3-3.txt, murmur3-4.txt, murmur3-5.txt, murmur3-6.txt, murmur3.txt
>
> faster and better than murmur2
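For readers unfamiliar with the algorithm being added: the x86 32-bit variant of MurmurHash3 is small enough to sketch in full. This is a generic implementation of the public-domain algorithm, not the code from the attached patches:

```java
public class Murmur3 {
    /** MurmurHash3 x86 32-bit over data[0..len), with the given seed. */
    static int hash32(byte[] data, int len, int seed) {
        final int c1 = 0xcc9e2d51, c2 = 0x1b873593;
        int h1 = seed;
        int i = 0;
        // Body: mix in each 4-byte little-endian block.
        for (; i + 4 <= len; i += 4) {
            int k1 = (data[i] & 0xff) | ((data[i + 1] & 0xff) << 8)
                   | ((data[i + 2] & 0xff) << 16) | ((data[i + 3] & 0xff) << 24);
            k1 *= c1; k1 = Integer.rotateLeft(k1, 15); k1 *= c2;
            h1 ^= k1; h1 = Integer.rotateLeft(h1, 13); h1 = h1 * 5 + 0xe6546b64;
        }
        // Tail: mix in the 0-3 remaining bytes.
        int k1 = 0;
        switch (len & 3) {
            case 3: k1 ^= (data[i + 2] & 0xff) << 16; // fall through
            case 2: k1 ^= (data[i + 1] & 0xff) << 8;  // fall through
            case 1: k1 ^= (data[i] & 0xff);
                    k1 *= c1; k1 = Integer.rotateLeft(k1, 15); k1 *= c2; h1 ^= k1;
        }
        // Finalization: force avalanching of the last bits.
        h1 ^= len;
        h1 ^= h1 >>> 16; h1 *= 0x85ebca6b;
        h1 ^= h1 >>> 13; h1 *= 0xc2b2ae35;
        h1 ^= h1 >>> 16;
        return h1;
    }

    public static void main(String[] args) {
        System.out.println(hash32(new byte[0], 0, 0)); // 0
    }
}
```

Compared to Murmur2, the 3rd revision fixes known weaknesses in the mixing of the final few bytes, which is where the "better" claim comes from.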
[jira] [Updated] (HADOOP-9090) Refactor MetricsSystemImpl to allow for an on-demand publish system
[ https://issues.apache.org/jira/browse/HADOOP-9090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mostafa Elhemali updated HADOOP-9090: - Attachment: HADOOP-9090.justEnhanceDefaultImpl.2.patch Thanks Luke. I've attached a new patch that guards against sinks taking too long (10 seconds by default) when consuming any single record in the publishMetricsNow() call. I do need to synchronize the publishing of metrics because publishNow() competes with the queue consumption thread for the sink. I've removed sinkLock though and just made the inner method synchronized (on the sink adapter itself); since no one else synchronizes on it, there should be no contention. > Refactor MetricsSystemImpl to allow for an on-demand publish system > --- > > Key: HADOOP-9090 > URL: https://issues.apache.org/jira/browse/HADOOP-9090 > Project: Hadoop Common > Issue Type: New Feature > Components: metrics >Reporter: Mostafa Elhemali >Priority: Minor > Attachments: HADOOP-9090.2.patch, > HADOOP-9090.justEnhanceDefaultImpl.2.patch, > HADOOP-9090.justEnhanceDefaultImpl.patch, HADOOP-9090.patch > > > We have a need to publish metrics out of some short-lived processes, which > is not really well-suited to the current metrics system implementation, which > periodically publishes metrics asynchronously (a behavior that works great > for long-lived processes). Of course I could write my own metrics system, > but it seems like such a waste to rewrite all the awesome code currently in > the MetricsSystemImpl and supporting classes. > The way I'm proposing to solve this is to: > 1. Refactor the MetricsSystemImpl class into an abstract base > MetricsSystemImpl class (common configuration and other code) and a concrete > PeriodicPublishMetricsSystemImpl class (timer thread). > 2. 
Refactor the MetricsSinkAdapter class into an abstract base > MetricsSinkAdapter class (common configuration and other code) and a concrete > AsyncMetricsSinkAdapter class (asynchronous publishing using the SinkQueue). > 3. Derive a new simple class OnDemandPublishMetricsSystemImpl from > MetricsSystemImpl, that just exposes a synchronous publish() method to do all > the work. > 4. Derive a SyncMetricsSinkAdapter class from MetricsSinkAdapter to just > synchronously push metrics to the underlying sink. > Does that sound reasonable? I'll attach the patch with all this coded up and > simple tests (could use some polish I guess, but wanted to get everyone's > opinion first). Notice that this is somewhat of a breaking change since > MetricsSystemImpl is public (although it's marked with > InterfaceAudience.Private); if the breaking change is a problem I could just > rename the refactored classes so that PeriodicPublishMetricsSystemImpl is > still called MetricsSystemImpl (and MetricsSystemImpl -> > BaseMetricsSystemImpl).
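The timeout-guarded synchronous publish this patch describes can be sketched with plain JDK concurrency primitives. This is an illustrative stand-in, not the actual Hadoop MetricsSinkAdapter code; the names `OnDemandSinkAdapter` and `publishMetricsNow` are assumptions for the sketch, which pushes one record on a worker thread and gives up after a configurable timeout (10 seconds by default in the patch):

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Hypothetical stand-in for the sink adapter discussed above.
public class OnDemandSinkAdapter {
    private final ExecutorService executor = Executors.newSingleThreadExecutor();
    private final long timeoutMillis;

    public OnDemandSinkAdapter(long timeoutMillis) {
        this.timeoutMillis = timeoutMillis;
    }

    // Synchronized on the adapter itself, so an on-demand publish cannot
    // race the queue-consumption thread for the underlying sink.
    public synchronized boolean publishMetricsNow(Runnable putMetrics) {
        Future<?> f = executor.submit(putMetrics);
        try {
            f.get(timeoutMillis, TimeUnit.MILLISECONDS);
            return true;
        } catch (TimeoutException e) {
            f.cancel(true);  // sink hung; don't block the caller forever
            return false;
        } catch (InterruptedException | ExecutionException e) {
            return false;
        }
    }

    public void shutdown() {
        executor.shutdownNow();
    }
}
```

A fast sink returns true immediately; a sink that hangs (e.g. on a network stall) is abandoned after the timeout instead of wedging a short-lived process at exit.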
[jira] [Updated] (HADOOP-9088) Add Murmur3 hash
[ https://issues.apache.org/jira/browse/HADOOP-9088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Radim Kolar updated HADOOP-9088: Attachment: murmur3-6.txt
[jira] [Commented] (HADOOP-9088) Add Murmur3 hash
[ https://issues.apache.org/jira/browse/HADOOP-9088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13504955#comment-13504955 ] Hadoop QA commented on HADOOP-9088: --- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12555064/murmur3-5.txt against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 2 new or modified test files. {color:red}-1 javac{color}. The patch appears to cause the build to fail. Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/1830//console This message is automatically generated.
[jira] [Updated] (HADOOP-9088) Add Murmur3 hash
[ https://issues.apache.org/jira/browse/HADOOP-9088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Radim Kolar updated HADOOP-9088: Attachment: murmur3-5.txt
[jira] [Commented] (HADOOP-9088) Add Murmur3 hash
[ https://issues.apache.org/jira/browse/HADOOP-9088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13504945#comment-13504945 ] Hadoop QA commented on HADOOP-9088: --- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12555062/murmur3-4.txt against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 2 new or modified test files. {color:red}-1 javac{color}. The patch appears to cause the build to fail. Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/1829//console This message is automatically generated.
[jira] [Commented] (HADOOP-9088) Add Murmur3 hash
[ https://issues.apache.org/jira/browse/HADOOP-9088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13504941#comment-13504941 ] Radim Kolar commented on HADOOP-9088: - The bloom filter code uses the util.Hash family of functions.
[jira] [Updated] (HADOOP-9088) Add Murmur3 hash
[ https://issues.apache.org/jira/browse/HADOOP-9088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Radim Kolar updated HADOOP-9088: Attachment: murmur3-4.txt A faster variant from Mahout.
[jira] [Commented] (HADOOP-9090) Refactor MetricsSystemImpl to allow for an on-demand publish system
[ https://issues.apache.org/jira/browse/HADOOP-9090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13504926#comment-13504926 ] Luke Lu commented on HADOOP-9090: - The new patch is indeed much simpler :) I don't think the sinkLock is needed, though, as publishMetrics is already synchronized. Another thing is error handling with publishMetricsNow: what if the underlying sink hangs (due to network issues)? Do you want the process to potentially hang forever if the final metrics are not published?
[jira] [Commented] (HADOOP-9088) Add Murmur3 hash
[ https://issues.apache.org/jira/browse/HADOOP-9088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13504919#comment-13504919 ] Radim Kolar commented on HADOOP-9088: - The hash partitioner could do the same thing as java.util.HashMap: rehash hashCode() to get a better distribution. The current hash quality in Hadoop is low: if you have a lot of similar strings like "" and "aab", you get about 20% suboptimal partitions in the average case, but in some specific cases it can split about 80:20 instead of close to 50:50.
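The "rehash like java.util.HashMap" idea above can be sketched in a few lines. The supplemental hash below is the spread function java.util.HashMap applies to keys; `SpreadPartitioner` and `getPartition` are illustrative names for this sketch, not Hadoop's actual partitioner API:

```java
// Sketch: spread the high bits of hashCode() into the low bits before
// taking the partition index, as java.util.HashMap does for bucket indexing.
public final class SpreadPartitioner {
    // Supplemental hash: XOR the top 16 bits into the bottom 16 bits.
    public static int spread(int h) {
        return h ^ (h >>> 16);
    }

    // Partition index from a key's hashCode, with the sign bit masked off
    // so the result is always in [0, numPartitions).
    public static int getPartition(Object key, int numPartitions) {
        return (spread(key.hashCode()) & Integer.MAX_VALUE) % numPartitions;
    }
}
```

This mitigates keys whose hashCodes differ mainly in the high bits; it does not fix genuinely weak hashCode implementations, which is where a stronger hash such as Murmur3 comes in.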
[jira] [Updated] (HADOOP-7868) Hadoop native fails to compile when default linker option is -Wl,--as-needed
[ https://issues.apache.org/jira/browse/HADOOP-7868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thomas Graves updated HADOOP-7868: -- Fix Version/s: 0.23.6 I pulled this into branch-0.23 > Hadoop native fails to compile when default linker option is -Wl,--as-needed > > > Key: HADOOP-7868 > URL: https://issues.apache.org/jira/browse/HADOOP-7868 > Project: Hadoop Common > Issue Type: Bug > Components: native >Affects Versions: 1.0.0, 2.0.0-alpha > Environment: Ubuntu Precise, Ubuntu Oneiric, Debian Unstable >Reporter: James Page >Assignee: Trevor Robinson > Fix For: 1.2.0, 2.0.2-alpha, 0.23.6 > > Attachments: hadoop-7868-b1.txt, HADOOP-7868.patch, > HADOOP-7868-portable.patch > > > Recent releases of Ubuntu and Debian have switched to using --as-needed as > default when linking binaries. > As a result the AC_COMPUTE_NEEDED_DSO fails to find the required DSO names > during execution of configure resulting in a build failure. > Explicitly using "-Wl,--no-as-needed" in this macro when required resolves > this issue. > See http://wiki.debian.org/ToolChain/DSOLinking for a few more details
[jira] [Commented] (HADOOP-9093) Move all the Exception in PathExceptions to o.a.h.fs package
[ https://issues.apache.org/jira/browse/HADOOP-9093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13504891#comment-13504891 ] Tsz Wo (Nicholas), SZE commented on HADOOP-9093: > ... I suppose PathException could be an interface ... I have also thought about changing the base class to an interface. However, it may not be very useful, since we cannot catch an interface: it would no longer extend Throwable. > Move all the Exception in PathExceptions to o.a.h.fs package > > > Key: HADOOP-9093 > URL: https://issues.apache.org/jira/browse/HADOOP-9093 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 2.0.2-alpha >Reporter: Suresh Srinivas >Assignee: Suresh Srinivas > Fix For: 2.0.3-alpha > > Attachments: HADOOP-9093.patch > > > The exceptions in PathExceptions are useful for non-shell-related > functionality as well. Making this available as exceptions under fs will help > move some of the HDFS implementation code to throw more specific exceptions than > IOException (for example, see HDFS-4209).
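The base-class-vs-interface point can be demonstrated directly: a Java catch clause only accepts Throwable subclasses, so a shared base class for path errors is catchable while a shared interface would not be. The class names below are illustrative, not the actual PathExceptions hierarchy:

```java
import java.io.IOException;

// Sketch: a PathException *base class* lets one catch clause handle the
// whole family; an interface could not appear in a catch clause at all,
// because catch types must extend Throwable.
public class PathExceptionDemo {
    static class PathException extends IOException {
        PathException(String msg) { super(msg); }
    }

    static class PathNotFoundException extends PathException {
        PathNotFoundException(String path) { super(path + ": no such file"); }
    }

    public static String describe(String path) {
        try {
            throw new PathNotFoundException(path);
        } catch (PathException e) {  // catches every path-related subclass
            return e.getMessage();
        }
    }
}
```

Because PathException here extends IOException, existing callers that catch IOException keep working while new code can catch the narrower type.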
[jira] [Commented] (HADOOP-9088) Add Murmur3 hash
[ https://issues.apache.org/jira/browse/HADOOP-9088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13504886#comment-13504886 ] Todd Lipcon commented on HADOOP-9088: - I don't think HashPartitioner can be changed to use anything but Object.hashCode (which we assume the user has implemented). Changing WritableComparator to use a different hash hardly seems worth it. Have you seen cases where the existing hash code causes poor distribution? Also, I don't think we can change the default hashcode for existing types, since it would change user-visible partitioning behavior. Adding a new TextPartitioner which uses Murmur3 sounds useful, if you can show that there are indeed real-world datasets where the "poor hash behavior" of the existing partitioner causes skew. > Add Murmur3 hash > > > Key: HADOOP-9088 > URL: https://issues.apache.org/jira/browse/HADOOP-9088 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Radim Kolar >Assignee: Radim Kolar > Attachments: murmur3-2.txt, murmur3-3.txt, murmur3.txt > > > faster and better then murmur2 -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9084) TotalOrderPartitioner fails on hadoop running on top of gpfs (or any parallel or distributed filesystem)
[ https://issues.apache.org/jira/browse/HADOOP-9084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13504882#comment-13504882 ] Hudson commented on HADOOP-9084: Integrated in Hadoop-trunk-Commit #3063 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/3063/]) HADOOP-9064. Ameding CHANGES.txt with correct JIRA ID, wrongfully used HADOOP-9084 before (tucu) (Revision 1414347) Result = SUCCESS tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1414347 Files : * /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt > TotalOrderPartitioner fails on hadoop running on top of gpfs (or any parallel > or distributed filesystem) > > > Key: HADOOP-9084 > URL: https://issues.apache.org/jira/browse/HADOOP-9084 > Project: Hadoop Common > Issue Type: Bug > Components: filecache, fs, native >Affects Versions: 1.0.3, 1.0.4 >Reporter: giovanni delussu >Assignee: giovanni delussu >Priority: Critical > Fix For: 1.0.3, 1.0.4 > > Attachments: PATCH-HADOOP-9084-origin-branch-1.0.patch > > > When running a job that uses TotalOrderPartitioner (like TeraSort or > BulkImport of HBase) on Hadoop running on top of GPFS (instead of HDFS), the > program fails to find the file _partition.lst because it is looking for it in > the wrong directory. The confusion is between "local fs" meaning not HDFS and > "local" meaning the distributed fs.
[jira] [Commented] (HADOOP-9064) Augment DelegationTokenRenewer API to cancel the tokens on calls to removeRenewAction
[ https://issues.apache.org/jira/browse/HADOOP-9064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13504883#comment-13504883 ] Hudson commented on HADOOP-9064: Integrated in Hadoop-trunk-Commit #3063 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/3063/]) HADOOP-9064. Ameding CHANGES.txt with correct JIRA ID, wrongfully used HADOOP-9084 before (tucu) (Revision 1414347) Result = SUCCESS tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1414347 Files : * /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt > Augment DelegationTokenRenewer API to cancel the tokens on calls to > removeRenewAction > - > > Key: HADOOP-9064 > URL: https://issues.apache.org/jira/browse/HADOOP-9064 > Project: Hadoop Common > Issue Type: Bug > Components: security >Affects Versions: 2.0.2-alpha >Reporter: Karthik Kambatla >Assignee: Karthik Kambatla > Fix For: 2.0.3-alpha > > Attachments: hadoop-9064.patch, hadoop-9064.patch, hadoop-9064.patch, > hadoop-9064.patch > > > Post HADOOP-9049, FileSystems register with DelegationTokenRenewer (a > singleton) to renew tokens. > To avoid a bunch of defunct tokens clogging the NN, we should augment the API to > {{#removeRenewAction(boolean cancel)}} and cancel the token appropriately.
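The API change described above can be sketched with a ScheduledExecutorService standing in for Hadoop's actual renewal thread. `Token` and `TokenRenewer` here are illustrative stand-ins, not the real DelegationTokenRenewer classes; the point is the `boolean cancel` flag on removal:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

// Sketch of a renewer whose removeRenewAction can also cancel the token,
// so defunct tokens do not pile up on the NameNode.
public class TokenRenewer {
    public static class Token {
        volatile boolean cancelled = false;
        void cancel() { cancelled = true; }  // would call the NN in real code
    }

    private final ScheduledExecutorService scheduler =
        Executors.newSingleThreadScheduledExecutor();
    private final Map<Token, ScheduledFuture<?>> actions = new ConcurrentHashMap<>();

    public void addRenewAction(Token t, long periodMillis) {
        actions.put(t, scheduler.scheduleAtFixedRate(
            () -> { /* renew t here */ },
            periodMillis, periodMillis, TimeUnit.MILLISECONDS));
    }

    // The augmented API: stop renewing, and optionally cancel the token.
    public void removeRenewAction(Token t, boolean cancel) {
        ScheduledFuture<?> f = actions.remove(t);
        if (f != null) f.cancel(false);
        if (cancel) t.cancel();
    }

    public void shutdown() { scheduler.shutdownNow(); }
}
```

A FileSystem's close path would call `removeRenewAction(token, true)` when the token is no longer needed by anyone, and `removeRenewAction(token, false)` when renewal should merely stop.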
[jira] [Commented] (HADOOP-9056) Build native library on Windows
[ https://issues.apache.org/jira/browse/HADOOP-9056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13504872#comment-13504872 ] Chuan Liu commented on HADOOP-9056: --- Looks good in general! I have the following questions and comments. # It seems this patch only contains changes that make ‘hadoop.dll’ compile on Windows. Some functionality is missing. For example, ‘POSIX.chmod’ was not ported over from branch-1-win; nor were some other IO methods. I also noticed some new functions were introduced in trunk compared with branch-1, for example posixFadviseIfPossible() and POSIX.posix_fadvise(). What is the plan for those missing functions? Do we plan to port them over in other JIRAs? # The “native.vcxproj.user” file is not needed. # Snappy does not work on Windows right now. It may be better to exclude Snappy-related files from the Windows build. # I think it is better to use the ‘WINDOWS’ instead of the ‘_WIN32’ macro in some places, because we explicitly defined the ‘UNIX’ and ‘WINDOWS’ macros in the beginning of “org_apache_hadoop.h”. It will make it easier in the future if we want to change the definition of the macros. # Some changes in SecureIOUtils and Datanode are not ported over. Again, I think this is related to point 1 above. > Build native library on Windows > --- > > Key: HADOOP-9056 > URL: https://issues.apache.org/jira/browse/HADOOP-9056 > Project: Hadoop Common > Issue Type: Improvement > Components: native >Affects Versions: trunk-win >Reporter: Arpit Agarwal >Assignee: Arpit Agarwal > Fix For: trunk-win > > Attachments: HADOOP-9056.patch > > Original Estimate: 168h > Remaining Estimate: 168h > > The native library (hadoop.dll) must be compiled on Windows.
[jira] [Commented] (HADOOP-9084) TotalOrderPartitioner fails on hadoop running on top of gpfs (or any parallel or distributed filesystem)
[ https://issues.apache.org/jira/browse/HADOOP-9084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13504868#comment-13504868 ] Hudson commented on HADOOP-9084: Integrated in Hadoop-trunk-Commit #3062 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/3062/]) HADOOP-9084. Augment DelegationTokenRenewer API to cancel the tokens on calls to removeRenewAction. (kkambatl via tucu) (Revision 1414326) Result = SUCCESS tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1414326 Files : * /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt * /hadoop/common/trunk/hadoop-common-project/hadoop-common/dev-support/findbugsExcludeFile.xml * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/DelegationTokenRenewer.java * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestDelegationTokenRenewer.java
[jira] [Commented] (HADOOP-9088) Add Murmur3 hash
[ https://issues.apache.org/jira/browse/HADOOP-9088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13504865#comment-13504865 ] Radim Kolar commented on HADOOP-9088: - To improve hashing: the WritableComparator hash is weak (it is used by BinaryPartitioner), and HashPartitioner is even worse (it uses Object.hashCode()). I will add a TextPartitioner using Murmur3; it has a good distribution of results.
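The proposed TextPartitioner idea can be sketched end to end: hash the key's UTF-8 bytes with MurmurHash3 x86_32 (Austin Appleby's public-domain algorithm), mask off the sign bit, and reduce modulo the partition count. This is a self-contained illustration, not the code in the attached patches; the class and method names are assumptions:

```java
import java.nio.charset.StandardCharsets;

// Sketch of a Murmur3-based text partitioner.
public final class Murmur3TextPartitioner {
    // MurmurHash3 x86_32 over a byte array.
    public static int hash32(byte[] data, int seed) {
        final int c1 = 0xcc9e2d51, c2 = 0x1b873593;
        int h1 = seed;
        int i = 0;
        for (; i + 4 <= data.length; i += 4) {      // 4-byte body blocks
            int k1 = (data[i] & 0xff) | ((data[i + 1] & 0xff) << 8)
                   | ((data[i + 2] & 0xff) << 16) | ((data[i + 3] & 0xff) << 24);
            k1 *= c1; k1 = Integer.rotateLeft(k1, 15); k1 *= c2;
            h1 ^= k1; h1 = Integer.rotateLeft(h1, 13); h1 = h1 * 5 + 0xe6546b64;
        }
        int k1 = 0;                                 // 1-3 trailing tail bytes
        switch (data.length & 3) {
            case 3: k1 ^= (data[i + 2] & 0xff) << 16;
            case 2: k1 ^= (data[i + 1] & 0xff) << 8;
            case 1: k1 ^= (data[i] & 0xff);
                    k1 *= c1; k1 = Integer.rotateLeft(k1, 15); k1 *= c2;
                    h1 ^= k1;
        }
        h1 ^= data.length;                          // finalization mix
        h1 ^= h1 >>> 16; h1 *= 0x85ebca6b;
        h1 ^= h1 >>> 13; h1 *= 0xc2b2ae35;
        h1 ^= h1 >>> 16;
        return h1;
    }

    // Partition index in [0, numPartitions) for a text key.
    public static int getPartition(String text, int numPartitions) {
        int h = hash32(text.getBytes(StandardCharsets.UTF_8), 0);
        return (h & Integer.MAX_VALUE) % numPartitions;
    }
}
```

Unlike String.hashCode(), the finalization mix spreads similar short keys across the full 32-bit range, which is the distribution property being argued for here.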
[jira] [Commented] (HADOOP-9090) Refactor MetricsSystemImpl to allow for an on-demand publish system
[ https://issues.apache.org/jira/browse/HADOOP-9090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13504862#comment-13504862 ] Hadoop QA commented on HADOOP-9090: --- {color:green}+1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12555046/HADOOP-9090.justEnhanceDefaultImpl.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 2 new or modified test files. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 core tests{color}. The patch passed unit tests in hadoop-common-project/hadoop-common. {color:green}+1 contrib tests{color}. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/1828//testReport/ Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/1828//console This message is automatically generated. 
[jira] [Updated] (HADOOP-9064) Augment DelegationTokenRenewer API to cancel the tokens on calls to removeRenewAction
[ https://issues.apache.org/jira/browse/HADOOP-9064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alejandro Abdelnur updated HADOOP-9064: --- Resolution: Fixed Fix Version/s: 2.0.3-alpha Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) Thanks Karthik. Committed to trunk and branch-2.
[jira] [Commented] (HADOOP-9064) Augment DelegationTokenRenewer API to cancel the tokens on calls to removeRenewAction
[ https://issues.apache.org/jira/browse/HADOOP-9064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13504840#comment-13504840 ] Alejandro Abdelnur commented on HADOOP-9064: +1
[jira] [Updated] (HADOOP-9090) Refactor MetricsSystemImpl to allow for an on-demand publish system
[ https://issues.apache.org/jira/browse/HADOOP-9090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mostafa Elhemali updated HADOOP-9090: - Attachment: HADOOP-9090.justEnhanceDefaultImpl.patch Thanks for the suggestion, Luke! I've attached an alternative patch doing what you're suggesting: adding a publishMetricsNow() method that synchronously publishes all metrics on the same thread in the default implementation. The alternative patch is much simpler, and honestly I'm having a "why didn't I think of that?" moment right now. If people are OK with the change to the default implementation (and the addition of the new interface method), and I'm not missing any race conditions (I'll keep looking), then I think the simpler patch would work just fine for my purposes. > Refactor MetricsSystemImpl to allow for an on-demand publish system > --- > > Key: HADOOP-9090 > URL: https://issues.apache.org/jira/browse/HADOOP-9090 > Project: Hadoop Common > Issue Type: New Feature > Components: metrics >Reporter: Mostafa Elhemali >Priority: Minor > Attachments: HADOOP-9090.2.patch, > HADOOP-9090.justEnhanceDefaultImpl.patch, HADOOP-9090.patch > > > We have a need to publish metrics out of some short-living processes, which > is not really well-suited to the current metrics system implementation which > periodically publishes metrics asynchronously (a behavior that works great > for long-living processes). Of course I could write my own metrics system, > but it seems like such a waste to rewrite all the awesome code currently in > the MetricsSystemImpl and supporting classes. > The way I'm proposing to solve this is to: > 1. Refactor the MetricsSystemImpl class into an abstract base > MetricsSystemImpl class (common configuration and other code) and a concrete > PeriodicPublishMetricsSystemImpl class (timer thread). > 2. 
Refactor the MetricsSinkAdapter class into an abstract base > MetricsSinkAdapter class (common configuration and other code) and a concrete > AsyncMetricsSinkAdapter class (asynchronous publishing using the SinkQueue). > 3. Derive a new simple class OnDemandPublishMetricsSystemImpl from > MetricsSystemImpl, that just exposes a synchronous publish() method to do all > the work. > 4. Derive a SyncMetricsSinkAdapter class from MetricsSinkAdapter to just > synchronously push metrics to the underlying sink. > Does that sound reasonable? I'll attach the patch with all this coded up and > simple tests (could use some polish I guess, but wanted to get everyone's > opinion first). Notice that this is somewhat of a breaking change since > MetricsSystemImpl is public (although it's marked with > InterfaceAudience.Private); if the breaking change is a problem I could just > rename the refactored classes so that PeriodicPublishMetricsSystemImpl is > still called MetricsSystemImpl (and MetricsSystemImpl -> > BaseMetricsSystemImpl). -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
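The publishMetricsNow() approach in the attached alternative patch can be illustrated with a toy model. Everything here except the publishMetricsNow() name is a simplifying assumption (the class name, the Consumer-based sinks, the String metrics); the real MetricsSystemImpl is far richer.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Hypothetical minimal model of the suggestion: instead of a parallel
// class hierarchy, the default metrics system gains one method,
// publishMetricsNow(), that snapshots and pushes metrics synchronously
// on the caller's thread.
class SimpleMetricsSystem {
    private final List<Consumer<String>> sinks = new ArrayList<>();
    private final List<String> pending = new ArrayList<>();

    void register(Consumer<String> sink) { sinks.add(sink); }
    void record(String metric) { pending.add(metric); }

    // Synchronous publish: by the time this returns, every sink has
    // consumed the snapshot; no timer thread or sink queue is involved,
    // so even a short-lived process can flush before exiting.
    void publishMetricsNow() {
        for (String metric : pending) {
            for (Consumer<String> sink : sinks) {
                sink.accept(metric);
            }
        }
        pending.clear();
    }
}
```

A long-running process can keep its periodic timer and simply call publishMetricsNow() once more before shutdown, which is what makes the single-method enhancement attractive.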
[jira] [Commented] (HADOOP-9088) Add Murmur3 hash
[ https://issues.apache.org/jira/browse/HADOOP-9088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13504824#comment-13504824 ] Todd Lipcon commented on HADOOP-9088: - Hi Radim. We don't currently use the murmur2 hash anywhere AFAIK. Where do you anticipate wanting to use this new hash function? > Add Murmur3 hash > > > Key: HADOOP-9088 > URL: https://issues.apache.org/jira/browse/HADOOP-9088 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Radim Kolar >Assignee: Radim Kolar > Attachments: murmur3-2.txt, murmur3-3.txt, murmur3.txt > > > faster and better than murmur2 -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-8981) TestMetricsSystemImpl fails on Windows
[ https://issues.apache.org/jira/browse/HADOOP-8981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13504810#comment-13504810 ] Xuan Gong commented on HADOOP-8981: --- I do not think our code is at fault in this test failure. The testInitFirst function works like this: it registers three sinks, named "Test", "Sink1" and "Sink2", each with its own blocking queue. The queues start empty, and whenever items appear in a queue, the corresponding sink consumes them. In testInitFirst, ms.onTimerEvent() puts items onto all of the queues so that the sinks can consume them, and the stop() call that follows stops all the sink threads. The test fails because the sinks have not finished consuming the items when stop() is called; if we add a Thread.sleep(someTime) before the stop() call, the test passes. Originally I thought we could check whether the queues are empty before actually stopping, but the purpose of stop() is that all sink threads stop when it is called, whether they are idle or in the middle of work. If we add such a check, or some other synchronization mechanism to guarantee that every queued item is consumed before the threads stop, doesn't that defeat the purpose of calling stop()? In fact, the failure shows that stop() works correctly: when we call it, everything stops no matter what it is doing. Based on that, I ran this test several times and found that it sometimes passes, sometimes half passes, and sometimes fails entirely. 
If we want to make the test case pass, we need to change it to something like:
verify(sink1, atMost(2)).putMetrics(r1.capture());
List mr1 = r1.getAllValues();
verify(sink2, atMost(2)).putMetrics(r2.capture());
List mr2 = r2.getAllValues();
if (mr1.size() != 0 && mr2.size() != 0) {
  checkMetricsRecords(mr1);
  assertEquals("output", mr1, mr2);
} else if (mr1.size() != 0) {
  checkMetricsRecords(mr1);
} else if (mr2.size() != 0) {
  checkMetricsRecords(mr2);
}
> TestMetricsSystemImpl fails on Windows > -- > > Key: HADOOP-8981 > URL: https://issues.apache.org/jira/browse/HADOOP-8981 > Project: Hadoop Common > Issue Type: Bug > Components: metrics >Affects Versions: trunk-win >Reporter: Chris Nauroth >Assignee: Arpit Agarwal > > The test is failing on an expected mock interaction. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
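The tolerant verification Xuan Gong proposes can be modeled without Mockito as a plain predicate; the class and method names below are hypothetical stand-ins, not the actual test code. The idea: a stopped sink may have consumed all, some, or none of the published records, so only whatever actually arrived is validated.

```java
import java.util.List;

// Mockito-free model of the tolerant check sketched in the comment above.
class TolerantCheck {
    // Stand-in for checkMetricsRecords(): reject null or empty records.
    static void checkRecords(List<String> records) {
        for (String r : records) {
            if (r == null || r.isEmpty()) {
                throw new AssertionError("bad record: " + r);
            }
        }
    }

    // Mirrors the atMost(2) verification: each sink saw at most two
    // records; when both sinks saw something, their outputs must match.
    static boolean verify(List<String> sink1, List<String> sink2) {
        if (sink1.size() > 2 || sink2.size() > 2) {
            return false;
        }
        if (!sink1.isEmpty() && !sink2.isEmpty()) {
            checkRecords(sink1);
            return sink1.equals(sink2);
        }
        if (!sink1.isEmpty()) {
            checkRecords(sink1);
        }
        if (!sink2.isEmpty()) {
            checkRecords(sink2);
        }
        return true;
    }
}
```

This keeps the assertion about stop()'s semantics intact: the test tolerates an abrupt stop rather than forcing stop() to drain the queues first.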
[jira] [Commented] (HADOOP-9090) Refactor MetricsSystemImpl to allow for an on-demand publish system
[ https://issues.apache.org/jira/browse/HADOOP-9090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13504783#comment-13504783 ] Luke Lu commented on HADOOP-9090: - I can imagine that you would want to control publishing the metrics at the end of a process. What about a longer-running process that also wants to periodically publish its metrics? Wouldn't it be better to enhance the metrics system to expose an on-demand publish-and-wait (until the metrics are sent or timed out) method, so the same metrics system can work with all processes, long-running or not? > Refactor MetricsSystemImpl to allow for an on-demand publish system > --- > > Key: HADOOP-9090 > URL: https://issues.apache.org/jira/browse/HADOOP-9090 > Project: Hadoop Common > Issue Type: New Feature > Components: metrics >Reporter: Mostafa Elhemali >Priority: Minor > Attachments: HADOOP-9090.2.patch, HADOOP-9090.patch > > > We have a need to publish metrics out of some short-living processes, which > is not really well-suited to the current metrics system implementation which > periodically publishes metrics asynchronously (a behavior that works great > for long-living processes). Of course I could write my own metrics system, > but it seems like such a waste to rewrite all the awesome code currently in > the MetricsSystemImpl and supporting classes. > The way I'm proposing to solve this is to: > 1. Refactor the MetricsSystemImpl class into an abstract base > MetricsSystemImpl class (common configuration and other code) and a concrete > PeriodicPublishMetricsSystemImpl class (timer thread). > 2. Refactor the MetricsSinkAdapter class into an abstract base > MetricsSinkAdapter class (common configuration and other code) and a concrete > AsyncMetricsSinkAdapter class (asynchronous publishing using the SinkQueue). > 3. Derive a new simple class OnDemandPublishMetricsSystemImpl from > MetricsSystemImpl, that just exposes a synchronous publish() method to do all > the work. 
> 4. Derive a SyncMetricsSinkAdapter class from MetricsSinkAdapter to just > synchronously push metrics to the underlying sink. > Does that sound reasonable? I'll attach the patch with all this coded up and > simple tests (could use some polish I guess, but wanted to get everyone's > opinion first). Notice that this is somewhat of a breaking change since > MetricsSystemImpl is public (although it's marked with > InterfaceAudience.Private); if the breaking change is a problem I could just > rename the refactored classes so that PeriodicPublishMetricsSystemImpl is > still called MetricsSystemImpl (and MetricsSystemImpl -> > BaseMetricsSystemImpl). -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9046) provide unit-test coverage of class org.apache.hadoop.fs.DelegationTokenRenewer.RenewAction
[ https://issues.apache.org/jira/browse/HADOOP-9046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13504760#comment-13504760 ] Robert Joseph Evans commented on HADOOP-9046: - I am glad that you caught it. Sorry I did not respond sooner. > provide unit-test coverage of class > org.apache.hadoop.fs.DelegationTokenRenewer.RenewAction > -- > > Key: HADOOP-9046 > URL: https://issues.apache.org/jira/browse/HADOOP-9046 > Project: Hadoop Common > Issue Type: Test >Reporter: Ivan A. Veselovsky >Assignee: Ivan A. Veselovsky >Priority: Minor > Attachments: HADOOP-9046-branch-0.23-over-9049.patch, > HADOOP-9046-branch-0.23.patch, HADOOP-9046-over-9049.patch, HADOOP-9046.patch > > > The class org.apache.hadoop.fs.DelegationTokenRenewer.RenewAction has zero > coverage in entire cumulative test run. Provide test(s) to cover this class. > Note: the request submitted to HDFS project because the class likely to be > tested by tests in that project. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9083) Port HADOOP-9020 Add a SASL PLAIN server to branch 1
[ https://issues.apache.org/jira/browse/HADOOP-9083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13504701#comment-13504701 ] Daryn Sharp commented on HADOOP-9083: - Is this intended to be the first of a series of jiras to port PLAIN auth into 1.x? I've not completed the implementation on trunk/branch-2, so this jira doesn't provide all the support necessary to actually use PLAIN auth via RPC. Based on a quick read of the Hive jira, it sounds as if perhaps Hive only intends to use the PLAIN server directly? > Port HADOOP-9020 Add a SASL PLAIN server to branch 1 > > > Key: HADOOP-9083 > URL: https://issues.apache.org/jira/browse/HADOOP-9083 > Project: Hadoop Common > Issue Type: Task > Components: ipc, security >Affects Versions: 1.0.3 >Reporter: Yu Gao >Assignee: Yu Gao > Attachments: HADOOP-9020-branch-1.patch, test-patch.result, > test-TestSaslRPC.result > > > It would be good if the patch of HADOOP-9020 for adding SASL PLAIN server > implementation could be ported to branch 1 as well. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9092) Coverage fixing for org.apache.hadoop.mapreduce.jobhistory
[ https://issues.apache.org/jira/browse/HADOOP-9092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13504694#comment-13504694 ] Robert Joseph Evans commented on HADOOP-9092: - The change looks good. I still have one minor comment: in TestJobHistoryEventHandler there is a line where you are using ...+"" to convert something to a string. This works, but I would prefer to see something more like String.valueOf(...) instead. Other than that, I am +1 on the patch. > Coverage fixing for org.apache.hadoop.mapreduce.jobhistory > --- > > Key: HADOOP-9092 > URL: https://issues.apache.org/jira/browse/HADOOP-9092 > Project: Hadoop Common > Issue Type: Test > Components: tools >Reporter: Aleksey Gorshkov > Attachments: HADOOP-9092-branch-0.23.patch, > HADOOP-9092-branch-2.patch, HADOOP-9092-trunk.patch > > > Coverage fixing for package org.apache.hadoop.mapreduce.jobhistory -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
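The review point in code form, as a hypothetical snippet (not the actual TestJobHistoryEventHandler line, which is elided above): both conversions yield the same string for an int, but String.valueOf states the intent directly.

```java
// Illustrative comparison of the two int-to-String conversions.
class ValueOfExample {
    static String viaConcat(int n) { return n + ""; }             // works, but obscure
    static String viaValueOf(int n) { return String.valueOf(n); } // preferred: explicit intent
}
```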
[jira] [Commented] (HADOOP-9046) provide unit-test coverage of class org.apache.hadoop.fs.DelegationTokenRenewer.RenewAction
[ https://issues.apache.org/jira/browse/HADOOP-9046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13504691#comment-13504691 ] Ivan A. Veselovsky commented on HADOOP-9046: Hi, Robert, it looks like I found the reason for the delay: the main thread blocks for a long time while waiting to enter the synchronized method org.apache.hadoop.fs.DelegationTokenRenewer.removeRenewAction(T), and this operation is invoked in StateSynchronizer *without* a timeout. I will update the patch when I fix this. Thanks a lot for this catch. > provide unit-test coverage of class > org.apache.hadoop.fs.DelegationTokenRenewer.RenewAction > -- > > Key: HADOOP-9046 > URL: https://issues.apache.org/jira/browse/HADOOP-9046 > Project: Hadoop Common > Issue Type: Test >Reporter: Ivan A. Veselovsky >Assignee: Ivan A. Veselovsky >Priority: Minor > Attachments: HADOOP-9046-branch-0.23-over-9049.patch, > HADOOP-9046-branch-0.23.patch, HADOOP-9046-over-9049.patch, HADOOP-9046.patch > > > The class org.apache.hadoop.fs.DelegationTokenRenewer.RenewAction has zero > coverage in entire cumulative test run. Provide test(s) to cover this class. > Note: the request submitted to HDFS project because the class likely to be > tested by tests in that project. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9093) Move all the Exception in PathExceptions to o.a.h.fs package
[ https://issues.apache.org/jira/browse/HADOOP-9093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13504648#comment-13504648 ] Daryn Sharp commented on HADOOP-9093: - bq. I propose using FileNotFoundException instead of PathNotFoundException as it is already extensively used. Similarly use AccessControlException instead of PathAccessException. If folks agree, I will make that change in the next patch. Alternatively we could at least make these exceptions subclasses of the exception that I am proposing replacing them with. I had considered that when I created these exceptions, but I wanted all path exceptions to derive from a common class. I suppose {{PathException}} could be an interface and we could copy-and-paste the base code - avoiding that duplication is the main reason I chose to derive from a base class. > Move all the Exception in PathExceptions to o.a.h.fs package > > > Key: HADOOP-9093 > URL: https://issues.apache.org/jira/browse/HADOOP-9093 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 2.0.2-alpha >Reporter: Suresh Srinivas >Assignee: Suresh Srinivas > Fix For: 2.0.3-alpha > > Attachments: HADOOP-9093.patch > > > The exceptions in PathExceptions are useful for non shell related > functionality as well. Making this available as exceptions under fs will help > move some of the HDFS implementation code throw more specific exception than > throwing IOException (for example see HDFS-4209). -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
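The trade-off under discussion can be sketched as follows: if PathException were an interface rather than a base class, PathNotFoundException could extend the widely-used java.io.FileNotFoundException and still be catchable through a common path-exception type; the cost Daryn notes is that any shared base code would have to be duplicated in each subclass. All names below are illustrative, not the committed Hadoop classes.

```java
import java.io.FileNotFoundException;

// Hypothetical marker interface for "any path-related exception".
interface PathException {
    String getPath();
}

// A path exception that is simultaneously a FileNotFoundException
// (so existing catch blocks keep working) and a PathException.
class PathNotFoundException extends FileNotFoundException implements PathException {
    private final String path;

    PathNotFoundException(String path) {
        super(path + ": No such file or directory");
        this.path = path;
    }

    @Override
    public String getPath() { return path; }
}
```

A catch clause for FileNotFoundException and one for PathException would both trap this exception, which is exactly the dual compatibility the comment weighs against the copy-and-paste cost.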
[jira] [Commented] (HADOOP-9046) provide unit-test coverage of class org.apache.hadoop.fs.DelegationTokenRenewer.RenewAction
[ https://issues.apache.org/jira/browse/HADOOP-9046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13504627#comment-13504627 ] Ivan A. Veselovsky commented on HADOOP-9046: One more note: the code that waits for the FS weak reference to be disposed runs with a 30-second timeout, so it cannot be the cause of a 7-minute slowdown. > provide unit-test coverage of class > org.apache.hadoop.fs.DelegationTokenRenewer.RenewAction > -- > > Key: HADOOP-9046 > URL: https://issues.apache.org/jira/browse/HADOOP-9046 > Project: Hadoop Common > Issue Type: Test >Reporter: Ivan A. Veselovsky >Assignee: Ivan A. Veselovsky >Priority: Minor > Attachments: HADOOP-9046-branch-0.23-over-9049.patch, > HADOOP-9046-branch-0.23.patch, HADOOP-9046-over-9049.patch, HADOOP-9046.patch > > > The class org.apache.hadoop.fs.DelegationTokenRenewer.RenewAction has zero > coverage in entire cumulative test run. Provide test(s) to cover this class. > Note: the request submitted to HDFS project because the class likely to be > tested by tests in that project. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9046) provide unit-test coverage of class org.apache.hadoop.fs.DelegationTokenRenewer.RenewAction
[ https://issues.apache.org/jira/browse/HADOOP-9046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13504626#comment-13504626 ] Ivan A. Veselovsky commented on HADOOP-9046: Hi, Robert, in the last pre-commit verification build (above) the test "org.apache.hadoop.fs.TestDelegationTokenRenewer" took 39 seconds: https://builds.apache.org/job/PreCommit-HADOOP-Build/1821//testReport/org.apache.hadoop.fs/TestDelegationTokenRenewer/ (both test cases). On my desktop a console mvn run of this test takes 38 seconds (I run just "mvn clean test -Dtest=org.apache.hadoop.fs.TestDelegationTokenRenewer"). When I run it from within Eclipse, it also takes 18 + 21 = 39 seconds. Looking at the code, I do not see how this test could take 7 minutes. So, can you please provide more details on the conditions of your experiment:
1) Did you use the attached "HADOOP-9046-over-9049.patch" against trunk?
2) Which JDK did you run the test on?
3) Did both test cases pass?
4) Do other Hadoop tests on your machine run at a speed similar to the Apache Jenkins build machines (https://builds.apache.org)?
5) What exact command line did you use to run the test(s)? (e.g. was -Pnative included?)
6) Is the effect (7 minutes) reproducible on that machine?
7) There are two test cases in the class org.apache.hadoop.fs.TestDelegationTokenRenewer. Which one was slow, or was it both?
Thanks in advance. > provide unit-test coverage of class > org.apache.hadoop.fs.DelegationTokenRenewer.RenewAction > -- > > Key: HADOOP-9046 > URL: https://issues.apache.org/jira/browse/HADOOP-9046 > Project: Hadoop Common > Issue Type: Test >Reporter: Ivan A. Veselovsky >Assignee: Ivan A. Veselovsky >Priority: Minor > Attachments: HADOOP-9046-branch-0.23-over-9049.patch, > HADOOP-9046-branch-0.23.patch, HADOOP-9046-over-9049.patch, HADOOP-9046.patch > > > The class org.apache.hadoop.fs.DelegationTokenRenewer.RenewAction has zero > coverage in entire cumulative test run. 
Provide test(s) to cover this class. > Note: the request submitted to HDFS project because the class likely to be > tested by tests in that project. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9093) Move all the Exception in PathExceptions to o.a.h.fs package
[ https://issues.apache.org/jira/browse/HADOOP-9093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13504608#comment-13504608 ] Hudson commented on HADOOP-9093: Integrated in Hadoop-Mapreduce-trunk #1270 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1270/]) HADOOP-9093. Move all the Exception in PathExceptions to o.a.h.fs package. Contributed by Suresh Srinivas. (Revision 1413960) Result = FAILURE suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1413960 Files : * /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/PathAccessDeniedException.java * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/PathExistsException.java * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/PathIOException.java * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/PathIsDirectoryException.java * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/PathIsNotDirectoryException.java * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/PathIsNotEmptyDirectoryException.java * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/PathNotFoundException.java * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/PathOperationException.java * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/PathPermissionException.java * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Command.java * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CommandWithDestination.java * 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CopyCommands.java * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Delete.java * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Display.java * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Mkdir.java * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/MoveCommands.java * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/PathData.java * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/PathExceptions.java * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/SetReplication.java * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Tail.java * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Test.java * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Touchz.java * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestPathExceptions.java > Move all the Exception in PathExceptions to o.a.h.fs package > > > Key: HADOOP-9093 > URL: https://issues.apache.org/jira/browse/HADOOP-9093 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 2.0.2-alpha >Reporter: Suresh Srinivas >Assignee: Suresh Srinivas > Fix For: 2.0.3-alpha > > Attachments: HADOOP-9093.patch > > > The exceptions in PathExceptions are useful for non shell related > functionality as well. Making this available as exceptions under fs will help > move some of the HDFS implementation code throw more specific exception than > throwing IOException (for example see HDFS-4209). 
-- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9038) provide unit-test coverage of class org.apache.hadoop.fs.LocalDirAllocator.AllocatorPerContext.PathIterator
[ https://issues.apache.org/jira/browse/HADOOP-9038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13504609#comment-13504609 ] Hudson commented on HADOOP-9038: Integrated in Hadoop-Mapreduce-trunk #1270 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1270/]) HADOOP-9038. unit-tests for AllocatorPerContext.PathIterator (Ivan A. Veselovsky via bobby) (Revision 1413776) Result = FAILURE bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1413776 Files : * /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/LocalDirAllocator.java * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestLocalDirAllocator.java > provide unit-test coverage of class > org.apache.hadoop.fs.LocalDirAllocator.AllocatorPerContext.PathIterator > --- > > Key: HADOOP-9038 > URL: https://issues.apache.org/jira/browse/HADOOP-9038 > Project: Hadoop Common > Issue Type: Test >Reporter: Ivan A. Veselovsky >Assignee: Ivan A. Veselovsky >Priority: Minor > Fix For: 3.0.0, 2.0.3-alpha, 0.23.6 > > Attachments: HADOOP-9038.patch > > > The class > org.apache.hadoop.fs.LocalDirAllocator.AllocatorPerContext.PathIterator > currently has zero unit-test coverage. Add/enhance the tests to provide one. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-8992) Enhance unit-test coverage of class HarFileSystem
[ https://issues.apache.org/jira/browse/HADOOP-8992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13504607#comment-13504607 ] Hudson commented on HADOOP-8992: Integrated in Hadoop-Mapreduce-trunk #1270 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1270/]) HADOOP-8992. Enhance unit-test coverage of class HarFileSystem (Ivan A. Veselovsky via bobby) (Revision 1413743) Result = FAILURE bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1413743 Files : * /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/HarFileSystem.java * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestHarFileSystemBasics.java > Enhance unit-test coverage of class HarFileSystem > - > > Key: HADOOP-8992 > URL: https://issues.apache.org/jira/browse/HADOOP-8992 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Ivan A. Veselovsky >Assignee: Ivan A. Veselovsky >Priority: Minor > Fix For: 3.0.0, 2.0.3-alpha, 0.23.6 > > Attachments: HADOOP-8992-branch-0.23--a.patch, > HADOOP-8992-branch-0.23--b.patch, HADOOP-8992-branch-0.23--c.patch, > HADOOP-8992-branch-2--a.patch, HADOOP-8992-branch-2--b.patch, > HADOOP-8992-branch-2--c.patch > > > New unit test TestHarFileSystem2 provided in order to enhance coverage of > class HarFileSystem. > Also some unused methods deleted from class HarFileSystem. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9016) Provide unit tests for class org.apache.hadoop.fs.HarFileSystem.HarFSDataInputStream
[ https://issues.apache.org/jira/browse/HADOOP-9016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13504595#comment-13504595 ] Hadoop QA commented on HADOOP-9016: --- {color:green}+1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12555003/HADOOP-9016--d.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 1 new or modified test files. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 core tests{color}. The patch passed unit tests in hadoop-common-project/hadoop-common hadoop-tools/hadoop-archives. {color:green}+1 contrib tests{color}. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/1827//testReport/ Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/1827//console This message is automatically generated. > Provide unit tests for class > org.apache.hadoop.fs.HarFileSystem.HarFSDataInputStream > - > > Key: HADOOP-9016 > URL: https://issues.apache.org/jira/browse/HADOOP-9016 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Ivan A. Veselovsky >Assignee: Ivan A. 
Veselovsky >Priority: Minor > Attachments: HADOOP-9016--b.patch, HADOOP-9016-branch-0.23--d.patch, > HADOOP-9016--c.patch, HADOOP-9016--d.patch, HADOOP-9016.patch > > > unit-test coverage of classes > org.apache.hadoop.fs.HarFileSystem.HarFSDataInputStream, > org.apache.hadoop.fs.HarFileSystem.HarFSDataInputStream.HarFsInputStream is > zero. > Suggested to provide unit-tests covering these classes. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9038) provide unit-test coverage of class org.apache.hadoop.fs.LocalDirAllocator.AllocatorPerContext.PathIterator
[ https://issues.apache.org/jira/browse/HADOOP-9038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13504592#comment-13504592 ] Hudson commented on HADOOP-9038: Integrated in Hadoop-Hdfs-trunk #1239 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1239/]) HADOOP-9038. unit-tests for AllocatorPerContext.PathIterator (Ivan A. Veselovsky via bobby) (Revision 1413776) Result = SUCCESS bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1413776 Files : * /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/LocalDirAllocator.java * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestLocalDirAllocator.java > provide unit-test coverage of class > org.apache.hadoop.fs.LocalDirAllocator.AllocatorPerContext.PathIterator > --- > > Key: HADOOP-9038 > URL: https://issues.apache.org/jira/browse/HADOOP-9038 > Project: Hadoop Common > Issue Type: Test >Reporter: Ivan A. Veselovsky >Assignee: Ivan A. Veselovsky >Priority: Minor > Fix For: 3.0.0, 2.0.3-alpha, 0.23.6 > > Attachments: HADOOP-9038.patch > > > The class > org.apache.hadoop.fs.LocalDirAllocator.AllocatorPerContext.PathIterator > currently has zero unit-test coverage. Add/enhance the tests to provide one. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9093) Move all the Exception in PathExceptions to o.a.h.fs package
[ https://issues.apache.org/jira/browse/HADOOP-9093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13504591#comment-13504591 ]

Hudson commented on HADOOP-9093:
--------------------------------

Integrated in Hadoop-Hdfs-trunk #1239 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1239/])
HADOOP-9093. Move all the Exception in PathExceptions to o.a.h.fs package. Contributed by Suresh Srinivas. (Revision 1413960)

Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1413960
Files :
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/PathAccessDeniedException.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/PathExistsException.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/PathIOException.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/PathIsDirectoryException.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/PathIsNotDirectoryException.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/PathIsNotEmptyDirectoryException.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/PathNotFoundException.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/PathOperationException.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/PathPermissionException.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Command.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CommandWithDestination.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CopyCommands.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Delete.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Display.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Mkdir.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/MoveCommands.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/PathData.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/PathExceptions.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/SetReplication.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Tail.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Test.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Touchz.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestPathExceptions.java

> Move all the Exception in PathExceptions to o.a.h.fs package
> -------------------------------------------------------------
>
>              Key: HADOOP-9093
>              URL: https://issues.apache.org/jira/browse/HADOOP-9093
>          Project: Hadoop Common
>       Issue Type: Improvement
> Affects Versions: 2.0.2-alpha
>         Reporter: Suresh Srinivas
>         Assignee: Suresh Srinivas
>          Fix For: 2.0.3-alpha
>
>      Attachments: HADOOP-9093.patch
>
>
> The exceptions in PathExceptions are useful for non-shell-related functionality as well. Making them available under fs will help move some of the HDFS implementation code to throw exceptions more specific than IOException (for example, see HDFS-4209).
[jira] [Commented] (HADOOP-8992) Enhance unit-test coverage of class HarFileSystem
[ https://issues.apache.org/jira/browse/HADOOP-8992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13504590#comment-13504590 ]

Hudson commented on HADOOP-8992:
--------------------------------

Integrated in Hadoop-Hdfs-trunk #1239 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1239/])
HADOOP-8992. Enhance unit-test coverage of class HarFileSystem (Ivan A. Veselovsky via bobby) (Revision 1413743)

Result = SUCCESS
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1413743
Files :
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/HarFileSystem.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestHarFileSystemBasics.java

> Enhance unit-test coverage of class HarFileSystem
> --------------------------------------------------
>
>          Key: HADOOP-8992
>          URL: https://issues.apache.org/jira/browse/HADOOP-8992
>      Project: Hadoop Common
>   Issue Type: Improvement
>     Reporter: Ivan A. Veselovsky
>     Assignee: Ivan A. Veselovsky
>     Priority: Minor
>      Fix For: 3.0.0, 2.0.3-alpha, 0.23.6
>
>  Attachments: HADOOP-8992-branch-0.23--a.patch, HADOOP-8992-branch-0.23--b.patch, HADOOP-8992-branch-0.23--c.patch, HADOOP-8992-branch-2--a.patch, HADOOP-8992-branch-2--b.patch, HADOOP-8992-branch-2--c.patch
>
>
> A new unit test, TestHarFileSystem2, is provided to enhance coverage of class HarFileSystem.
> Also, some unused methods were deleted from class HarFileSystem.
[jira] [Created] (HADOOP-9098) Add missing license headers
Tom White created HADOOP-9098:
------------------------------

         Summary: Add missing license headers
             Key: HADOOP-9098
             URL: https://issues.apache.org/jira/browse/HADOOP-9098
         Project: Hadoop Common
      Issue Type: Bug
      Components: build
Affects Versions: 1.1.1
        Reporter: Tom White
        Priority: Blocker
         Fix For: 1.2.0

According to the RAT report, some source files (e.g. TestUnderReplicatedBlocks.java) are missing license headers.
[jira] [Commented] (HADOOP-9038) provide unit-test coverage of class org.apache.hadoop.fs.LocalDirAllocator.AllocatorPerContext.PathIterator
[ https://issues.apache.org/jira/browse/HADOOP-9038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13504583#comment-13504583 ]

Hudson commented on HADOOP-9038:
--------------------------------

Integrated in Hadoop-Hdfs-0.23-Build #448 (See [https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/448/])
svn merge -c 1413776 FIXES: HADOOP-9038. unit-tests for AllocatorPerContext.PathIterator (Ivan A. Veselovsky via bobby) (Revision 1413779)

Result = SUCCESS
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1413779
Files :
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/LocalDirAllocator.java
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestLocalDirAllocator.java

> provide unit-test coverage of class org.apache.hadoop.fs.LocalDirAllocator.AllocatorPerContext.PathIterator
> ------------------------------------------------------------------------------------------------------------
>
>          Key: HADOOP-9038
>          URL: https://issues.apache.org/jira/browse/HADOOP-9038
>      Project: Hadoop Common
>   Issue Type: Test
>     Reporter: Ivan A. Veselovsky
>     Assignee: Ivan A. Veselovsky
>     Priority: Minor
>      Fix For: 3.0.0, 2.0.3-alpha, 0.23.6
>
>  Attachments: HADOOP-9038.patch
>
>
> The class org.apache.hadoop.fs.LocalDirAllocator.AllocatorPerContext.PathIterator currently has zero unit-test coverage. Add/enhance the tests to provide one.
[jira] [Commented] (HADOOP-8992) Enhance unit-test coverage of class HarFileSystem
[ https://issues.apache.org/jira/browse/HADOOP-8992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13504582#comment-13504582 ]

Hudson commented on HADOOP-8992:
--------------------------------

Integrated in Hadoop-Hdfs-0.23-Build #448 (See [https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/448/])
svn merge -c 1413743 FIXES: HADOOP-8992. Enhance unit-test coverage of class HarFileSystem (Ivan A. Veselovsky via bobby) (Revision 1413746)

Result = SUCCESS
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1413746
Files :
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/HarFileSystem.java
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestHarFileSystemBasics.java

> Enhance unit-test coverage of class HarFileSystem
> --------------------------------------------------
>
>          Key: HADOOP-8992
>          URL: https://issues.apache.org/jira/browse/HADOOP-8992
>      Project: Hadoop Common
>   Issue Type: Improvement
>     Reporter: Ivan A. Veselovsky
>     Assignee: Ivan A. Veselovsky
>     Priority: Minor
>      Fix For: 3.0.0, 2.0.3-alpha, 0.23.6
>
>  Attachments: HADOOP-8992-branch-0.23--a.patch, HADOOP-8992-branch-0.23--b.patch, HADOOP-8992-branch-0.23--c.patch, HADOOP-8992-branch-2--a.patch, HADOOP-8992-branch-2--b.patch, HADOOP-8992-branch-2--c.patch
>
>
> A new unit test, TestHarFileSystem2, is provided to enhance coverage of class HarFileSystem.
> Also, some unused methods were deleted from class HarFileSystem.
[jira] [Updated] (HADOOP-9016) Provide unit tests for class org.apache.hadoop.fs.HarFileSystem.HarFSDataInputStream
[ https://issues.apache.org/jira/browse/HADOOP-9016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ivan A. Veselovsky updated HADOOP-9016:
---------------------------------------

    Attachment: HADOOP-9016--d.patch
                HADOOP-9016-branch-0.23--d.patch

The attached patches "HADOOP-9016-branch-0.23--d.patch" and "HADOOP-9016--d.patch" have been fixed to apply cleanly to the current repository HEAD. The patch "HADOOP-9016--d.patch" targets the branches "trunk" and "branch-2"; the patch "HADOOP-9016-branch-0.23--d.patch" targets "branch-0.23".

> Provide unit tests for class org.apache.hadoop.fs.HarFileSystem.HarFSDataInputStream
> -------------------------------------------------------------------------------------
>
>          Key: HADOOP-9016
>          URL: https://issues.apache.org/jira/browse/HADOOP-9016
>      Project: Hadoop Common
>   Issue Type: Improvement
>     Reporter: Ivan A. Veselovsky
>     Assignee: Ivan A. Veselovsky
>     Priority: Minor
>  Attachments: HADOOP-9016--b.patch, HADOOP-9016-branch-0.23--d.patch, HADOOP-9016--c.patch, HADOOP-9016--d.patch, HADOOP-9016.patch
>
>
> Unit-test coverage of the classes org.apache.hadoop.fs.HarFileSystem.HarFSDataInputStream and org.apache.hadoop.fs.HarFileSystem.HarFSDataInputStream.HarFsInputStream is zero.
> Suggested to provide unit tests covering these classes.
[jira] [Commented] (HADOOP-9097) Maven RAT plugin is not checking all source files
[ https://issues.apache.org/jira/browse/HADOOP-9097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13504572#comment-13504572 ]

Tom White commented on HADOOP-9097:
-----------------------------------

It looks like the plugin is configured to only check pom.xml in some places:
{noformat}
<plugin>
  <groupId>org.apache.rat</groupId>
  <artifactId>apache-rat-plugin</artifactId>
  <configuration>
    <includes>
      <include>pom.xml</include>
    </includes>
  </configuration>
</plugin>
{noformat}
We should change this to include everything by default, and list exclusions in cases where it is not possible to add a license header (e.g. binary files). Service loader files (under META-INF) can have headers added, since # is recognized as a comment.

> Maven RAT plugin is not checking all source files
> --------------------------------------------------
>
>              Key: HADOOP-9097
>              URL: https://issues.apache.org/jira/browse/HADOOP-9097
>          Project: Hadoop Common
>       Issue Type: Bug
>       Components: build
> Affects Versions: 2.0.3-alpha, 0.23.5
>         Reporter: Tom White
>         Priority: Blocker
>          Fix For: 2.0.3-alpha, 0.23.6
>
>
> Running 'mvn apache-rat:check' passes, but running RAT by hand (by downloading the JAR) produces some warnings for Java files, amongst others.
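The change Tom White suggests would look roughly like the following sketch: drop the `<includes>` element so the apache-rat-plugin scans every file by default, and add an `<excludes>` list only for files that cannot carry a license header. The exclude patterns below are illustrative placeholders, not taken from the actual Hadoop patch:

```xml
<plugin>
  <groupId>org.apache.rat</groupId>
  <artifactId>apache-rat-plugin</artifactId>
  <configuration>
    <!-- No <includes>: by default the plugin checks all files in the module. -->
    <excludes>
      <!-- Illustrative excludes only: binary files and other files
           where adding a license header is not possible. -->
      <exclude>**/*.png</exclude>
      <exclude>**/*.gz</exclude>
    </excludes>
  </configuration>
</plugin>
```

Note that service-loader files under META-INF would deliberately not be excluded here, since (as noted above) they accept `#`-prefixed comment headers.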
[jira] [Created] (HADOOP-9097) Maven RAT plugin is not checking all source files
Tom White created HADOOP-9097:
------------------------------

         Summary: Maven RAT plugin is not checking all source files
             Key: HADOOP-9097
             URL: https://issues.apache.org/jira/browse/HADOOP-9097
         Project: Hadoop Common
      Issue Type: Bug
      Components: build
Affects Versions: 2.0.3-alpha, 0.23.5
        Reporter: Tom White
        Priority: Blocker
         Fix For: 2.0.3-alpha, 0.23.6

Running 'mvn apache-rat:check' passes, but running RAT by hand (by downloading the JAR) produces some warnings for Java files, amongst others.
[jira] [Commented] (HADOOP-9038) provide unit-test coverage of class org.apache.hadoop.fs.LocalDirAllocator.AllocatorPerContext.PathIterator
[ https://issues.apache.org/jira/browse/HADOOP-9038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13504524#comment-13504524 ]

Hudson commented on HADOOP-9038:
--------------------------------

Integrated in Hadoop-Yarn-trunk #49 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/49/])
HADOOP-9038. unit-tests for AllocatorPerContext.PathIterator (Ivan A. Veselovsky via bobby) (Revision 1413776)

Result = SUCCESS
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1413776
Files :
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/LocalDirAllocator.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestLocalDirAllocator.java

> provide unit-test coverage of class org.apache.hadoop.fs.LocalDirAllocator.AllocatorPerContext.PathIterator
> ------------------------------------------------------------------------------------------------------------
>
>          Key: HADOOP-9038
>          URL: https://issues.apache.org/jira/browse/HADOOP-9038
>      Project: Hadoop Common
>   Issue Type: Test
>     Reporter: Ivan A. Veselovsky
>     Assignee: Ivan A. Veselovsky
>     Priority: Minor
>      Fix For: 3.0.0, 2.0.3-alpha, 0.23.6
>
>  Attachments: HADOOP-9038.patch
>
>
> The class org.apache.hadoop.fs.LocalDirAllocator.AllocatorPerContext.PathIterator currently has zero unit-test coverage. Add/enhance the tests to provide one.
[jira] [Commented] (HADOOP-8992) Enhance unit-test coverage of class HarFileSystem
[ https://issues.apache.org/jira/browse/HADOOP-8992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13504522#comment-13504522 ]

Hudson commented on HADOOP-8992:
--------------------------------

Integrated in Hadoop-Yarn-trunk #49 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/49/])
HADOOP-8992. Enhance unit-test coverage of class HarFileSystem (Ivan A. Veselovsky via bobby) (Revision 1413743)

Result = SUCCESS
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1413743
Files :
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/HarFileSystem.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestHarFileSystemBasics.java

> Enhance unit-test coverage of class HarFileSystem
> --------------------------------------------------
>
>          Key: HADOOP-8992
>          URL: https://issues.apache.org/jira/browse/HADOOP-8992
>      Project: Hadoop Common
>   Issue Type: Improvement
>     Reporter: Ivan A. Veselovsky
>     Assignee: Ivan A. Veselovsky
>     Priority: Minor
>      Fix For: 3.0.0, 2.0.3-alpha, 0.23.6
>
>  Attachments: HADOOP-8992-branch-0.23--a.patch, HADOOP-8992-branch-0.23--b.patch, HADOOP-8992-branch-0.23--c.patch, HADOOP-8992-branch-2--a.patch, HADOOP-8992-branch-2--b.patch, HADOOP-8992-branch-2--c.patch
>
>
> A new unit test, TestHarFileSystem2, is provided to enhance coverage of class HarFileSystem.
> Also, some unused methods were deleted from class HarFileSystem.
[jira] [Commented] (HADOOP-9093) Move all the Exception in PathExceptions to o.a.h.fs package
[ https://issues.apache.org/jira/browse/HADOOP-9093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13504523#comment-13504523 ]

Hudson commented on HADOOP-9093:
--------------------------------

Integrated in Hadoop-Yarn-trunk #49 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/49/])
HADOOP-9093. Move all the Exception in PathExceptions to o.a.h.fs package. Contributed by Suresh Srinivas. (Revision 1413960)

Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1413960
Files :
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/PathAccessDeniedException.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/PathExistsException.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/PathIOException.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/PathIsDirectoryException.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/PathIsNotDirectoryException.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/PathIsNotEmptyDirectoryException.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/PathNotFoundException.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/PathOperationException.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/PathPermissionException.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Command.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CommandWithDestination.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CopyCommands.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Delete.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Display.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Mkdir.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/MoveCommands.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/PathData.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/PathExceptions.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/SetReplication.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Tail.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Test.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Touchz.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestPathExceptions.java

> Move all the Exception in PathExceptions to o.a.h.fs package
> -------------------------------------------------------------
>
>              Key: HADOOP-9093
>              URL: https://issues.apache.org/jira/browse/HADOOP-9093
>          Project: Hadoop Common
>       Issue Type: Improvement
> Affects Versions: 2.0.2-alpha
>         Reporter: Suresh Srinivas
>         Assignee: Suresh Srinivas
>          Fix For: 2.0.3-alpha
>
>      Attachments: HADOOP-9093.patch
>
>
> The exceptions in PathExceptions are useful for non-shell-related functionality as well. Making them available under fs will help move some of the HDFS implementation code to throw exceptions more specific than IOException (for example, see HDFS-4209).