[jira] [Commented] (HADOOP-9438) LocalFileContext does not throw an exception on mkdir for already existing directory

2013-05-22 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13664881#comment-13664881
 ] 

Ivan Mitic commented on HADOOP-9438:


bq. I have just created the jira issue: 
https://issues.apache.org/jira/browse/MAPREDUCE-5264. I will work on it soon.
Thanks Remy, much appreciated!

> LocalFileContext does not throw an exception on mkdir for already existing 
> directory
> 
>
> Key: HADOOP-9438
> URL: https://issues.apache.org/jira/browse/HADOOP-9438
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.0.3-alpha
>Reporter: Robert Joseph Evans
>Priority: Critical
> Attachments: HADOOP-9438.20130501.1.patch, 
> HADOOP-9438.20130521.1.patch, HADOOP-9438.patch, HADOOP-9438.patch
>
>
> according to 
> http://hadoop.apache.org/docs/current/api/org/apache/hadoop/fs/FileContext.html#mkdir%28org.apache.hadoop.fs.Path,%20org.apache.hadoop.fs.permission.FsPermission,%20boolean%29
> should throw a FileAlreadyExistsException if the directory already exists.
> I tested this and 
> {code}
> FileContext lfc = FileContext.getLocalFSFileContext(new Configuration());
> Path p = new Path("/tmp/bobby.12345");
> FsPermission cachePerms = new FsPermission((short) 0755);
> lfc.mkdir(p, cachePerms, false);
> lfc.mkdir(p, cachePerms, false);
> {code}
> never throws an exception.
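The observed behavior matches what java.io.File.mkdir() does on the local filesystem: it reports an already-existing directory by returning false rather than by throwing. A minimal standalone sketch (illustrative helper names, not Hadoop code) of the check the local implementation would need:

```java
import java.io.File;
import java.io.IOException;

// Illustrative only (not Hadoop code): java.io.File.mkdir() signals an
// already-existing directory by returning false instead of throwing, so a
// FileContext-style wrapper has to add the existence check itself.
public class MkdirCheck {

    // Sketch of the missing check: throw if the directory already exists.
    static void mkdirOrThrow(File dir) throws IOException {
        if (dir.exists()) {
            throw new IOException("already exists: " + dir);
        }
        if (!dir.mkdir()) {
            throw new IOException("mkdir failed: " + dir);
        }
    }

    // Returns true when the second mkdir on the same path throws.
    static boolean secondMkdirThrew() {
        File d = new File(System.getProperty("java.io.tmpdir"),
                          "mkdir-check-demo-" + System.nanoTime());
        try {
            mkdirOrThrow(d);        // first call: creates the directory
            mkdirOrThrow(d);        // second call: must throw
            return false;
        } catch (IOException expected) {
            return true;
        } finally {
            d.delete();
        }
    }

    public static void main(String[] args) {
        System.out.println("second mkdir threw: " + secondMkdirThrew());
    }
}
```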

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9517) Define Hadoop Compatibility

2013-05-22 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13664734#comment-13664734
 ] 

Alejandro Abdelnur commented on HADOOP-9517:


Shouldn't the proto files themselves be classified as public and stable?

> Define Hadoop Compatibility
> ---
>
> Key: HADOOP-9517
> URL: https://issues.apache.org/jira/browse/HADOOP-9517
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Arun C Murthy
>Assignee: Karthik Kambatla
> Attachments: hadoop-9517.patch, hadoop-9517.patch, hadoop-9517.patch, 
> hadoop-9517.patch, hadoop-9517-proposal-v1.patch
>
>
> As we get ready to call hadoop-2 stable we need to better define 'Hadoop 
> Compatibility'.
> http://wiki.apache.org/hadoop/Compatibility is a start, let's document 
> requirements clearly and completely.



[jira] [Updated] (HADOOP-9517) Define Hadoop Compatibility

2013-05-22 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-9517:
-

Attachment: hadoop-9517-proposal-v1.patch

Uploading a patch addressing comments from Sanjay, Steve, and Doug:
# Add additional compatibility sections
# Add newly proposed policies, clearly annotated as (Proposal)

> Define Hadoop Compatibility
> ---
>
> Key: HADOOP-9517
> URL: https://issues.apache.org/jira/browse/HADOOP-9517
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Arun C Murthy
>Assignee: Karthik Kambatla
> Attachments: hadoop-9517.patch, hadoop-9517.patch, hadoop-9517.patch, 
> hadoop-9517.patch, hadoop-9517-proposal-v1.patch
>
>
> As we get ready to call hadoop-2 stable we need to better define 'Hadoop 
> Compatibility'.
> http://wiki.apache.org/hadoop/Compatibility is a start, let's document 
> requirements clearly and completely.



[jira] [Commented] (HADOOP-8981) TestMetricsSystemImpl fails on Windows

2013-05-22 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13664609#comment-13664609
 ] 

Suresh Srinivas commented on HADOOP-8981:
-

I committed the branch-1 patch to branch-1 and branch-1-win. Thank you Chris.

> TestMetricsSystemImpl fails on Windows
> --
>
> Key: HADOOP-8981
> URL: https://issues.apache.org/jira/browse/HADOOP-8981
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 2.0.0-alpha
>Reporter: Chris Nauroth
>Assignee: Xuan Gong
> Fix For: 2.0.3-alpha
>
> Attachments: HADOOP-8981-branch-1.patch, 
> HADOOP-8981-branch-trunk-win.1.patch, HADOOP-8981-branch-trunk-win.2.patch, 
> HADOOP-8981-branch-trunk-win.3.patch, HADOOP-8981-branch-trunk-win.4.patch, 
> HADOOP-8981-branch-trunk-win.5.patch
>
>
> The test is failing on an expected mock interaction.



[jira] [Commented] (HADOOP-8981) TestMetricsSystemImpl fails on Windows

2013-05-22 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13664602#comment-13664602
 ] 

Suresh Srinivas commented on HADOOP-8981:
-

+1 for the branch-1 patch.

> TestMetricsSystemImpl fails on Windows
> --
>
> Key: HADOOP-8981
> URL: https://issues.apache.org/jira/browse/HADOOP-8981
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 2.0.0-alpha
>Reporter: Chris Nauroth
>Assignee: Xuan Gong
> Fix For: 2.0.3-alpha
>
> Attachments: HADOOP-8981-branch-1.patch, 
> HADOOP-8981-branch-trunk-win.1.patch, HADOOP-8981-branch-trunk-win.2.patch, 
> HADOOP-8981-branch-trunk-win.3.patch, HADOOP-8981-branch-trunk-win.4.patch, 
> HADOOP-8981-branch-trunk-win.5.patch
>
>
> The test is failing on an expected mock interaction.



[jira] [Commented] (HADOOP-8745) Incorrect version numbers in hadoop-core POM

2013-05-22 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13664597#comment-13664597
 ] 

Suresh Srinivas commented on HADOOP-8745:
-

I have merged the patch to branch-1-win.

> Incorrect version numbers in hadoop-core POM
> 
>
> Key: HADOOP-8745
> URL: https://issues.apache.org/jira/browse/HADOOP-8745
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 1.0.3
>Reporter: Matthias Friedrich
>Assignee: Matthias Friedrich
>Priority: Minor
> Fix For: 1.1.1
>
> Attachments: HADOOP-8745-branch-1.0.patch
>
>
> The hadoop-core POM as published to Maven central has different dependency 
> versions than Hadoop actually has on its runtime classpath. This can lead to 
> client code working in unit tests but failing on the cluster and vice versa.
> The following version numbers are incorrect: jackson-mapper-asl, kfs, and 
> jets3t. There's also a duplicate dependency to commons-net.



[jira] [Commented] (HADOOP-9582) Non-existent file to "hadoop fs -conf" doesn't throw error

2013-05-22 Thread Ashwin Shankar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13664558#comment-13664558
 ] 

Ashwin Shankar commented on HADOOP-9582:


{quote}I've been looking at this a bit more, and now I'm worried about some 
compatibility issue. It looks like -conf can be used to specify any resource 
path. Is this right, or am I misreading it?{quote}
I'm not sure whether URIs are allowed for the '-conf' option. I checked the 
documentation and javadoc, and neither mentions URIs. But I would tend to 
believe that URIs are allowed, since they are allowed for other options like 
'hadoop fs -fs'. If this is the case, then you're right, we have a problem.

{quote}At the same time, delving into Configuration.loadResource() looks like 
it gets the URI of the Path instance, calls getPath() on it and then converts 
it to a file for loading (skipping missing entries).{quote}
Yes, that's right. Here is where it gets interesting, and I have a question for 
you.
Configuration.loadResource() has the following code :
{code}
if (doc == null && root == null) {
  if (quiet)
    return null;
  throw new RuntimeException(resource + " not found");
}
{code}
Looking at the code, it seems that if the file doesn't exist we do throw a 
RuntimeException when the 'quiet' flag is false, which means removing the 
'quiet' flag seems to solve our problem. My question is: why is there a 'quiet' 
flag in the first place? I looked at the svn file history, and this particular 
code snippet goes back to the early days (2006), hence there is no 
documentation about it.
Do you know about this flag?
Is there some ancient use case where it's legal to pass a non-existent config 
file while loading it 'quietly'? Because this patch will break that.
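To make that contract concrete, here is a minimal standalone model of the quoted branch (not the actual Hadoop Configuration code): with quiet == true a missing resource is silently skipped, with quiet == false it becomes a RuntimeException, which is why dropping 'quiet' surfaces the error.

```java
// Minimal standalone model (not the actual Hadoop Configuration code) of the
// quiet-flag branch quoted above.
public class QuietFlagDemo {

    // Mimics the quoted snippet: doc == null && root == null means the
    // resource could not be found or parsed.
    static Object loadResource(Object doc, Object root, String resource,
                               boolean quiet) {
        if (doc == null && root == null) {
            if (quiet) {
                return null;                 // missing resource silently skipped
            }
            throw new RuntimeException(resource + " not found");
        }
        return doc != null ? doc : root;
    }

    static boolean quietReturnsNull() {
        return loadResource(null, null, "BAD_FILE", true) == null;
    }

    static boolean loudThrows() {
        try {
            loadResource(null, null, "BAD_FILE", false);
            return false;
        } catch (RuntimeException expected) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println(quietReturnsNull() && loudThrows());
    }
}
```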

> Non-existent file to "hadoop fs -conf" doesn't throw error
> --
>
> Key: HADOOP-9582
> URL: https://issues.apache.org/jira/browse/HADOOP-9582
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 3.0.0, 0.23.7, 2.0.4-alpha
>Reporter: Ashwin Shankar
> Attachments: HADOOP-9582.txt, HADOOP-9582.txt
>
>
> When we run :
> hadoop fs -conf BAD_FILE -ls /
> we expect hadoop to throw an error,but it doesn't.



[jira] [Commented] (HADOOP-8982) TestSocketIOWithTimeout fails on Windows

2013-05-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13664476#comment-13664476
 ] 

Hadoop QA commented on HADOOP-8982:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12584366/HADOOP-8982.2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2558//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2558//console

This message is automatically generated.

> TestSocketIOWithTimeout fails on Windows
> 
>
> Key: HADOOP-8982
> URL: https://issues.apache.org/jira/browse/HADOOP-8982
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: net
>Affects Versions: 3.0.0
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-8982.1.patch, HADOOP-8982.2.patch
>
>
> This is a possible race condition or difference in socket handling on Windows.



[jira] [Commented] (HADOOP-8982) TestSocketIOWithTimeout fails on Windows

2013-05-22 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13664471#comment-13664471
 ] 

Arpit Agarwal commented on HADOOP-8982:
---

Hi Chris,

I am not confident about this statement and I think we should just remove it.
{quote}
Windows appears to buffer large amounts of written data and send it all 
atomically, thus making it impossible to simulate a partial write scenario.
{quote}

It is equally likely that the data is buffered on the peer's receiving socket. 
Ideally we would rewrite the test later. :)

+1 otherwise.

Thanks,
Arpit

> TestSocketIOWithTimeout fails on Windows
> 
>
> Key: HADOOP-8982
> URL: https://issues.apache.org/jira/browse/HADOOP-8982
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: net
>Affects Versions: 3.0.0
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-8982.1.patch, HADOOP-8982.2.patch
>
>
> This is a possible race condition or difference in socket handling on Windows.



[jira] [Updated] (HADOOP-8982) TestSocketIOWithTimeout fails on Windows

2013-05-22 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-8982:
--

Status: Patch Available  (was: Open)

> TestSocketIOWithTimeout fails on Windows
> 
>
> Key: HADOOP-8982
> URL: https://issues.apache.org/jira/browse/HADOOP-8982
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: net
>Affects Versions: 3.0.0
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-8982.1.patch, HADOOP-8982.2.patch
>
>
> This is a possible race condition or difference in socket handling on Windows.



[jira] [Updated] (HADOOP-8982) TestSocketIOWithTimeout fails on Windows

2013-05-22 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-8982:
--

Attachment: HADOOP-8982.2.patch

{quote}
I am not sure his analysis is correct. Also he tested with Python so it may be 
changing the default buffering options.
{quote}

In that case, I'd prefer not to mention that information in the comment.  Here 
is an updated patch that removes the hyperlink from the comment.

> TestSocketIOWithTimeout fails on Windows
> 
>
> Key: HADOOP-8982
> URL: https://issues.apache.org/jira/browse/HADOOP-8982
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: net
>Affects Versions: 3.0.0
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-8982.1.patch, HADOOP-8982.2.patch
>
>
> This is a possible race condition or difference in socket handling on Windows.



[jira] [Commented] (HADOOP-8982) TestSocketIOWithTimeout fails on Windows

2013-05-22 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13664433#comment-13664433
 ] 

Arpit Agarwal commented on HADOOP-8982:
---

By 'his analysis' I meant the one on linked page.

> TestSocketIOWithTimeout fails on Windows
> 
>
> Key: HADOOP-8982
> URL: https://issues.apache.org/jira/browse/HADOOP-8982
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: net
>Affects Versions: 3.0.0
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-8982.1.patch
>
>
> This is a possible race condition or difference in socket handling on Windows.



[jira] [Commented] (HADOOP-8982) TestSocketIOWithTimeout fails on Windows

2013-05-22 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13664428#comment-13664428
 ] 

Arpit Agarwal commented on HADOOP-8982:
---

I am not sure his analysis is correct. Also he tested with Python so it may be 
changing the default buffering options.

Skipping it is reasonable. The problem with this kind of test is that its 
behavior may change with each combination of OS and JVM version.

> TestSocketIOWithTimeout fails on Windows
> 
>
> Key: HADOOP-8982
> URL: https://issues.apache.org/jira/browse/HADOOP-8982
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: net
>Affects Versions: 3.0.0
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-8982.1.patch
>
>
> This is a possible race condition or difference in socket handling on Windows.



[jira] [Updated] (HADOOP-8982) TestSocketIOWithTimeout fails on Windows

2013-05-22 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-8982:
--

Attachment: HADOOP-8982.1.patch

After many, many failed attempts to simulate partial writes on Windows, I'm 
submitting a patch that bypasses just the portion of the test that depends on 
partial write when running on Windows.

The earlier analysis is correct that the test is not able to observe partial 
write on Windows.  At first, I thought the behavior was strictly due to 
differences in implementation of pipes between Linux and Windows, so I tried 
changing the test to use a real network socket.  Then, I tried ever-increasing 
write buffer sizes.  None of my attempts successfully simulated partial write 
on Windows.  At this point, I think we need to skip this portion of the test on 
Windows.

This page lists some interesting differences between Unix and Windows sockets:

http://itamarst.org/writings/win32sockets.html

The last point is particularly relevant.

> TestSocketIOWithTimeout fails on Windows
> 
>
> Key: HADOOP-8982
> URL: https://issues.apache.org/jira/browse/HADOOP-8982
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: net
>Affects Versions: 3.0.0
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-8982.1.patch
>
>
> This is a possible race condition or difference in socket handling on Windows.



[jira] [Commented] (HADOOP-9592) libhdfs append test fails

2013-05-22 Thread Giridharan Kesavan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13664349#comment-13664349
 ] 

Giridharan Kesavan commented on HADOOP-9592:


HADOOP-8230 removed default append support. To fix the libhdfs test failure, we 
should either remove the append test or set dfs.support.broken.append to true.
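As a sketch of the second option (property name taken from the comment above; whether the libhdfs test setup reads it from hdfs-site.xml is an assumption):

```xml
<!-- hdfs-site.xml: re-enable append so the libhdfs append test can run -->
<property>
  <name>dfs.support.broken.append</name>
  <value>true</value>
</property>
```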


> libhdfs append test fails
> -
>
> Key: HADOOP-9592
> URL: https://issues.apache.org/jira/browse/HADOOP-9592
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Giridharan Kesavan
>
>  [exec] Wrote 6 bytes
>  [exec] Flushed /tmp/appends successfully!
>  [exec] Exception in thread "main" org.apache.hadoop.ipc.RemoteException: 
> java.io.IOException: Append is not supported. Please see the 
> dfs.support.append configuration parameter
>  [exec]   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:1781)
>  [exec]   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.append(NameNode.java:725)
>  [exec]   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  [exec]   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>  [exec]   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>  [exec]   at java.lang.reflect.Method.invoke(Method.java:597)
>  [exec]   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
>  [exec]   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
>  [exec]   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
>  [exec]   at java.security.AccessController.doPrivileged(Native Method)
>  [exec]   at javax.security.auth.Subject.doAs(Subject.java:396)
>  [exec]   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
>  [exec]   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
>  [exec] 
>  [exec]   at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>  [exec]   at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
>  [exec]   at $Proxy1.append(Unknown Source)
>  [exec]   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  [exec]   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>  [exec]   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>  [exec]   at java.lang.reflect.Method.invoke(Method.java:597)
>  [exec]   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
>  [exec]   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
>  [exec]   at $Proxy1.append(Unknown Source)
>  [exec]   at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:933)
>  [exec]   at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:922)
>  [exec]   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:196)
>  [exec]   at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:650)
>  [exec] Call to 
> org.apache.hadoop.conf.FileSystem::append((Lorg/apache/hadoop/fs/Path;)Lorg/apache/hadoop/fs/FSDataOutputStream;)
>  failed!
>  [exec] Failed to open /tmp/appends for writing!
>  [exec] Warning: $HADOOP_HOME is deprecated.
>  [exec] 



[jira] [Commented] (HADOOP-9582) Non-existent file to "hadoop fs -conf" doesn't throw error

2013-05-22 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13664341#comment-13664341
 ] 

Steve Loughran commented on HADOOP-9582:


My current (YARN-117) fork of Hadoop trunk has an extra check in Configuration: 
it doesn't load empty files. This is the patch from HADOOP-9453, which adds a 
{{file.length() > 0}} check to the criteria and stops the XML parser from 
failing on an empty file.

Should your -conf code reject empty files too? Today an empty file triggers the 
XML parser failure, and it's hard to see why we'd want to permit it once 
HADOOP-9453 gets checked in.
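A standalone sketch of the combined criteria (illustrative names; the actual HADOOP-9453 change lives inside Configuration, not in a helper like this):

```java
import java.io.File;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;

// Illustrative only: a -conf argument is usable only if it names an existing,
// non-empty regular file; an empty file would otherwise reach the XML parser
// and fail there.
public class ConfFileCheck {

    static boolean isUsableConfFile(File f) {
        return f.isFile() && f.length() > 0;
    }

    public static void main(String[] args) throws IOException {
        File empty = File.createTempFile("conf-empty", ".xml");
        File nonEmpty = File.createTempFile("conf-ok", ".xml");
        Files.write(nonEmpty.toPath(),
                    "<configuration/>".getBytes(StandardCharsets.UTF_8));

        System.out.println("missing:  " + isUsableConfFile(new File("BAD_FILE")));
        System.out.println("empty:    " + isUsableConfFile(empty));
        System.out.println("nonEmpty: " + isUsableConfFile(nonEmpty));

        empty.delete();
        nonEmpty.delete();
    }
}
```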

> Non-existent file to "hadoop fs -conf" doesn't throw error
> --
>
> Key: HADOOP-9582
> URL: https://issues.apache.org/jira/browse/HADOOP-9582
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 3.0.0, 0.23.7, 2.0.4-alpha
>Reporter: Ashwin Shankar
> Attachments: HADOOP-9582.txt, HADOOP-9582.txt
>
>
> When we run :
> hadoop fs -conf BAD_FILE -ls /
> we expect hadoop to throw an error,but it doesn't.



[jira] [Created] (HADOOP-9592) libhdfs append test fails

2013-05-22 Thread Giridharan Kesavan (JIRA)
Giridharan Kesavan created HADOOP-9592:
--

 Summary: libhdfs append test fails
 Key: HADOOP-9592
 URL: https://issues.apache.org/jira/browse/HADOOP-9592
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.3.0
Reporter: Giridharan Kesavan




 [exec] Wrote 6 bytes
 [exec] Flushed /tmp/appends successfully!
 [exec] Exception in thread "main" org.apache.hadoop.ipc.RemoteException: 
java.io.IOException: Append is not supported. Please see the dfs.support.append 
configuration parameter
 [exec] at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:1781)
 [exec] at 
org.apache.hadoop.hdfs.server.namenode.NameNode.append(NameNode.java:725)
 [exec] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 [exec] at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 [exec] at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 [exec] at java.lang.reflect.Method.invoke(Method.java:597)
 [exec] at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
 [exec] at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
 [exec] at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
 [exec] at java.security.AccessController.doPrivileged(Native Method)
 [exec] at javax.security.auth.Subject.doAs(Subject.java:396)
 [exec] at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
 [exec] at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
 [exec] 
 [exec] at org.apache.hadoop.ipc.Client.call(Client.java:1107)
 [exec] at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
 [exec] at $Proxy1.append(Unknown Source)
 [exec] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 [exec] at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 [exec] at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 [exec] at java.lang.reflect.Method.invoke(Method.java:597)
 [exec] at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
 [exec] at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
 [exec] at $Proxy1.append(Unknown Source)
 [exec] at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:933)
 [exec] at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:922)
 [exec] at 
org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:196)
 [exec] at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:650)
 [exec] Call to 
org.apache.hadoop.conf.FileSystem::append((Lorg/apache/hadoop/fs/Path;)Lorg/apache/hadoop/fs/FSDataOutputStream;)
 failed!
 [exec] Failed to open /tmp/appends for writing!
 [exec] Warning: $HADOOP_HOME is deprecated.
 [exec] 






[jira] [Updated] (HADOOP-8886) Remove KFS support

2013-05-22 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-8886:


Fix Version/s: 2.0.5-beta

I committed the patch to branch-2.

> Remove KFS support
> --
>
> Key: HADOOP-8886
> URL: https://issues.apache.org/jira/browse/HADOOP-8886
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Reporter: Eli Collins
>Assignee: Eli Collins
> Fix For: 3.0.0, 2.0.5-beta
>
> Attachments: hadoop-8886.txt
>
>
> KFS is no longer maintained (is replaced by QFS, which HADOOP-8885 is 
> adding), let's remove it.



[jira] [Commented] (HADOOP-9194) RPC Support for QoS

2013-05-22 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13664307#comment-13664307
 ] 

Suresh Srinivas commented on HADOOP-9194:
-

I also moved this change from the NEW FEATURES section to INCOMPATIBLE CHANGES 
in the branch-2 CHANGES.txt.

> RPC Support for QoS
> ---
>
> Key: HADOOP-9194
> URL: https://issues.apache.org/jira/browse/HADOOP-9194
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: ipc
>Affects Versions: 2.0.2-alpha
>Reporter: Luke Lu
>Assignee: Junping Du
> Fix For: 3.0.0
>
> Attachments: HADOOP-9194.patch, HADOOP-9194-v2.patch
>
>
> One of the next frontiers of Hadoop performance is QoS (Quality of Service). 
> We need QoS support to fight the inevitable "buffer bloat" (including various 
> queues, which are probably necessary for throughput) in our software stack. 
> This is important for mixed workload with different latency and throughput 
> requirements (e.g. OLTP vs OLAP, batch and even compaction I/O) against the 
> same DFS.
> Any potential bottleneck will need to be managed by QoS mechanisms, starting 
> with RPC. 
> How about adding a one byte DS (differentiated services) field (a la the 
> 6-bit DS field in IP header) in the RPC header to facilitate the QoS 
> mechanisms (in separate JIRAs)? The byte at a fixed offset (how about 0?) of 
> the header is helpful for implementing high performance QoS mechanisms in 
> switches (software or hardware) and servers with minimum decoding effort.
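The fixed-offset idea can be sketched as follows (standalone Java; the one-byte field at offset 0 and the 6-bit width are the proposal's suggestion, not a committed wire format):

```java
// Sketch of the proposal above: a 6-bit DS class carried in the first byte of
// the RPC header, so switches and servers can classify a call by reading a
// single byte at a fixed offset. Layout is illustrative, not a committed
// wire format.
public class DsField {

    // Prepend the DS byte (offset 0) to the rest of the header.
    static byte[] prependDs(int dsClass, byte[] headerRest) {
        byte[] out = new byte[headerRest.length + 1];
        out[0] = (byte) (dsClass & 0x3F);   // keep 6 bits, a la the IP DS field
        System.arraycopy(headerRest, 0, out, 1, headerRest.length);
        return out;
    }

    // Classify without decoding anything else in the header.
    static int readDs(byte[] header) {
        return header[0] & 0x3F;
    }

    public static void main(String[] args) {
        byte[] h = prependDs(46, new byte[] {0x01, 0x02});
        System.out.println(readDs(h));
    }
}
```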



[jira] [Commented] (HADOOP-9527) TestLocalFSFileContextSymlink is broken on Windows

2013-05-22 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13664306#comment-13664306
 ] 

Arpit Agarwal commented on HADOOP-9527:
---

Ivan, thanks for reviewing and posting your feedback.

I think your suggestion is reasonable but I need to understand it better.

> TestLocalFSFileContextSymlink is broken on Windows
> --
>
> Key: HADOOP-9527
> URL: https://issues.apache.org/jira/browse/HADOOP-9527
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 3.0.0
>
> Attachments: HADOOP-9527.001.patch, HADOOP-9527.002.patch, 
> HADOOP-9527.003.patch, HADOOP-9527.004.patch, HADOOP-9527.005.patch, 
> HADOOP-9527.006.patch, HADOOP-9527.007.patch, RenameLink.java
>
>
> Multiple test cases are broken. I didn't look at each failure in detail.
> The main cause of the failures appears to be that RawLocalFS.readLink() does 
> not work on Windows. We need "winutils readlink" to fix the test.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9194) RPC Support for QoS

2013-05-22 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13664304#comment-13664304
 ] 

Suresh Srinivas commented on HADOOP-9194:
-

Luke, please update the trunk CHANGES.txt once a change gets ported to 2.x, and 
move the jira to the appropriate section. I have made that change for this 
jira's description in trunk.

> RPC Support for QoS
> ---
>
> Key: HADOOP-9194
> URL: https://issues.apache.org/jira/browse/HADOOP-9194
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: ipc
>Affects Versions: 2.0.2-alpha
>Reporter: Luke Lu
>Assignee: Junping Du
> Fix For: 3.0.0
>
> Attachments: HADOOP-9194.patch, HADOOP-9194-v2.patch
>
>
> One of the next frontiers of Hadoop performance is QoS (Quality of Service). 
> We need QoS support to fight the inevitable "buffer bloat" (including various 
> queues, which are probably necessary for throughput) in our software stack. 
> This is important for mixed workloads with different latency and throughput 
> requirements (e.g. OLTP vs OLAP, batch and even compaction I/O) against the 
> same DFS.
> Any potential bottleneck will need to be managed by QoS mechanisms, starting 
> with RPC. 
> How about adding a one-byte DS (differentiated services) field (a la the 
> 6-bit DS field in the IP header) to the RPC header to facilitate QoS 
> mechanisms (in separate JIRAs)? A byte at a fixed offset (how about 0?) of 
> the header makes it easy to implement high-performance QoS mechanisms in 
> switches (software or hardware) and servers with minimal decoding effort.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9425) Add error codes to rpc-response

2013-05-22 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13664301#comment-13664301
 ] 

Suresh Srinivas commented on HADOOP-9425:
-

Sanjay, after merging a change to branch-2, please update the trunk CHANGES.txt 
and move the jira description to the appropriate 2.x section. I have made these 
changes for your recent commits.

> Add error codes to rpc-response
> ---
>
> Key: HADOOP-9425
> URL: https://issues.apache.org/jira/browse/HADOOP-9425
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Reporter: Sanjay Radia
>Assignee: Sanjay Radia
> Fix For: 2.0.5-beta
>
> Attachments: HADOOP-9425-1.patch, HADOOP-9425-2.patch, 
> HADOOP-9425-3.patch, HADOOP-9425-4.patch, HADOOP-9425-5.patch, 
> HADOOP-9425-6.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8562) Enhancements to support Hadoop on Windows Server and Windows Azure environments

2013-05-22 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13664290#comment-13664290
 ] 

Chris Nauroth commented on HADOOP-8562:
---

+1 for merge to branch-2.  Thanks, Suresh!

> Enhancements to support Hadoop on Windows Server and Windows Azure 
> environments
> ---
>
> Key: HADOOP-8562
> URL: https://issues.apache.org/jira/browse/HADOOP-8562
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 3.0.0
>Reporter: Bikas Saha
>Assignee: Bikas Saha
> Fix For: 3.0.0
>
> Attachments: branch-trunk-win.min-notest.patch, 
> branch-trunk-win-min.patch, branch-trunk-win.min.patch, 
> branch-trunk-win.patch, branch-trunk-win.patch, branch-trunk-win.patch, 
> branch-trunk-win.patch, branch-trunk-win.patch, branch-trunk-win.patch, 
> branch-trunk-win.patch, branch-trunk-win.patch, branch-trunk-win.patch, 
> branch-trunk-win.patch, branch-trunk-win.patch, test-untar.tar, test-untar.tgz
>
>
> This JIRA tracks the work that needs to be done on trunk to enable Hadoop to 
> run on Windows Server and Azure environments. This incorporates porting 
> relevant work from the similar effort on branch 1 tracked via HADOOP-8079.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8562) Enhancements to support Hadoop on Windows Server and Windows Azure environments

2013-05-22 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13664287#comment-13664287
 ] 

Suresh Srinivas commented on HADOOP-8562:
-

I plan on merging changes from this jira and related jiras to branch-2 by the 
end of the day, per my previous comment.

> Enhancements to support Hadoop on Windows Server and Windows Azure 
> environments
> ---
>
> Key: HADOOP-8562
> URL: https://issues.apache.org/jira/browse/HADOOP-8562
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 3.0.0
>Reporter: Bikas Saha
>Assignee: Bikas Saha
> Fix For: 3.0.0
>
> Attachments: branch-trunk-win.min-notest.patch, 
> branch-trunk-win-min.patch, branch-trunk-win.min.patch, 
> branch-trunk-win.patch, branch-trunk-win.patch, branch-trunk-win.patch, 
> branch-trunk-win.patch, branch-trunk-win.patch, branch-trunk-win.patch, 
> branch-trunk-win.patch, branch-trunk-win.patch, branch-trunk-win.patch, 
> branch-trunk-win.patch, branch-trunk-win.patch, test-untar.tar, test-untar.tgz
>
>
> This JIRA tracks the work that needs to be done on trunk to enable Hadoop to 
> run on Windows Server and Azure environments. This incorporates porting 
> relevant work from the similar effort on branch 1 tracked via HADOOP-8079.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8886) Remove KFS support

2013-05-22 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13664281#comment-13664281
 ] 

Suresh Srinivas commented on HADOOP-8886:
-

[~t.st.clair] I will push this change into 2.x.

> Remove KFS support
> --
>
> Key: HADOOP-8886
> URL: https://issues.apache.org/jira/browse/HADOOP-8886
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Reporter: Eli Collins
>Assignee: Eli Collins
> Fix For: 3.0.0
>
> Attachments: hadoop-8886.txt
>
>
> KFS is no longer maintained (is replaced by QFS, which HADOOP-8885 is 
> adding), let's remove it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9582) Non-existent file to "hadoop fs -conf" doesn't throw error

2013-05-22 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13664282#comment-13664282
 ] 

Steve Loughran commented on HADOOP-9582:


I've been looking at this a bit more, and now I'm worried about a 
compatibility issue. It looks like {{-conf}} can be used to specify any 
resource path. Is this right, or am I misreading it?

If I am right, I could go {{--conf ftp://common/stdconf.xml}}, or {{--conf 
http://service/cluster-site.xml}}.

This could be handy -so I'm worried that some people may already be using it. 
If they are, we've got a problem.

At the same time, delving into {{Configuration.loadResource()}}, it looks like 
it gets the URI of the {{Path}} instance, calls {{getPath()}} on it, and then 
converts it to a file for loading (skipping missing entries).

Is that right? That the {{Path}} representation of a command-line path is 
simply an intermediate state -and that adding the existence check wouldn't 
interfere with that resolution.

If backwards compatibility isn't at risk -which I think is the case- then this 
check is something we can keep in, though it may be safer to build the 

{code}
  Path resource = new Path(value);
  File confFile = new File(resource.toUri().getPath()); // from Configuration
  if (!confFile.exists()) {
    throw new FileNotFoundException("Configuration file " + value + " not found");
  }
  conf.addResource(resource);
{code}
The advantage of this approach is that we can be confident the relative path 
resolution logic is consistent in both places -something that tests could 
verify as well.

(I'm just adding extra work here, aren't I? Sorry)



> Non-existent file to "hadoop fs -conf" doesn't throw error
> --
>
> Key: HADOOP-9582
> URL: https://issues.apache.org/jira/browse/HADOOP-9582
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 3.0.0, 0.23.7, 2.0.4-alpha
>Reporter: Ashwin Shankar
> Attachments: HADOOP-9582.txt, HADOOP-9582.txt
>
>
> When we run :
> hadoop fs -conf BAD_FILE -ls /
> we expect hadoop to throw an error,but it doesn't.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9421) Convert SASL to use ProtoBuf and add lengths for non-blocking processing

2013-05-22 Thread Luke Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13664260#comment-13664260
 ] 

Luke Lu commented on HADOOP-9421:
-

bq. When it gets an RPC error, such as invalid rpc version or unsupported 
serialization, it blows up trying to decode it as SASL.

The server should reply with a client-version-appropriate response; see 
Server#setupBadVersionResponse, which currently doesn't handle SASL correctly 
for the >=9 case, because the code is not finished yet :). But this is a 
separate issue. Stuffing the SASL exchange into RpcRequestHeaderProto seems 
like a hack to avoid fixing the bad version response. 

> Convert SASL to use ProtoBuf and add lengths for non-blocking processing
> 
>
> Key: HADOOP-9421
> URL: https://issues.apache.org/jira/browse/HADOOP-9421
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 2.0.3-alpha
>Reporter: Sanjay Radia
>Assignee: Daryn Sharp
> Attachments: HADOOP-9421.patch, HADOOP-9421-v2-demo.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9582) Non-existent file to "hadoop fs -conf" doesn't throw error

2013-05-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13664238#comment-13664238
 ] 

Hadoop QA commented on HADOOP-9582:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12584323/HADOOP-9582.txt
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2557//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2557//console

This message is automatically generated.

> Non-existent file to "hadoop fs -conf" doesn't throw error
> --
>
> Key: HADOOP-9582
> URL: https://issues.apache.org/jira/browse/HADOOP-9582
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 3.0.0, 0.23.7, 2.0.4-alpha
>Reporter: Ashwin Shankar
> Attachments: HADOOP-9582.txt, HADOOP-9582.txt
>
>
> When we run :
> hadoop fs -conf BAD_FILE -ls /
> we expect hadoop to throw an error,but it doesn't.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9590) Move to JDK7 improved APIs for file operations when available

2013-05-22 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13664224#comment-13664224
 ] 

Chris Nauroth commented on HADOOP-9590:
---

Thanks for submitting this jira and documenting all of the impacts.  I'd also 
like to add to the list that {{java.nio.file.Files#move}} will be nice to have 
for things like moving the new fsimage into place with overwrite after a 
successful checkpoint.

[http://docs.oracle.com/javase/7/docs/api/java/nio/file/Files.html#move(java.nio.file.Path,
 java.nio.file.Path, java.nio.file.CopyOption...)]

That can help us avoid platform-specific workarounds, like 
{{FSImage#renameCheckpointInDir}}:

{code}
  private void renameCheckpointInDir(StorageDirectory sd, long txid)
  throws IOException {
File ckpt = NNStorage.getStorageFile(sd, NameNodeFile.IMAGE_NEW, txid);
File curFile = NNStorage.getStorageFile(sd, NameNodeFile.IMAGE, txid);
// renameTo fails on Windows if the destination file 
// already exists.
if(LOG.isDebugEnabled()) {
  LOG.debug("renaming  " + ckpt.getAbsolutePath() 
+ " to " + curFile.getAbsolutePath());
}
if (!ckpt.renameTo(curFile)) {
  if (!curFile.delete() || !ckpt.renameTo(curFile)) {
throw new IOException("renaming  " + ckpt.getAbsolutePath() + " to "  + 
curFile.getAbsolutePath() + " FAILED");
  }
}
  }
{code}
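For comparison, a minimal sketch of the same rename-with-overwrite using the JDK7 API mentioned above; the class and method names here are illustrative, not the actual FSImage code. Unlike {{File#renameTo}}, {{Files.move}} with {{REPLACE_EXISTING}} overwrites the destination in one call and reports the reason for any failure in the thrown {{IOException}}.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Hypothetical JDK7 version of the renameTo/delete dance above.
public class CheckpointRename {
  static void renameCheckpoint(Path ckpt, Path curFile) throws IOException {
    // Overwrites curFile if it exists, on Windows as well as Unix.
    Files.move(ckpt, curFile, StandardCopyOption.REPLACE_EXISTING);
  }
}
```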


> Move to JDK7 improved APIs for file operations when available
> -
>
> Key: HADOOP-9590
> URL: https://issues.apache.org/jira/browse/HADOOP-9590
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ivan Mitic
>
> JDK6 does not have complete support for local file system file operations. 
> Specifically:
> - There are no symlink/hardlink APIs, which forced Hadoop to fall back to 
> shell-based tooling
> - No error information is returned when File#mkdir/mkdirs or File#renameTo 
> fails, making it unnecessarily hard to troubleshoot some issues
> - File#canRead/canWrite/canExecute do not perform any access checks on 
> Windows, making the APIs inconsistent with the Unix behavior
> - File#setReadable/setWritable/setExecutable do not change access rights on 
> Windows, making the APIs inconsistent with the Unix behavior
> - File#length does not work as expected on symlinks on Windows
> - File#renameTo does not work as expected on symlinks on Windows
> All of the above resulted in the Hadoop community having to fill in the gaps 
> by providing equivalent native implementations or applying workarounds. 
> JDK7 addressed (as far as I know) all (or most) of the above problems, either 
> through the newly introduced 
> [Files|http://docs.oracle.com/javase/7/docs/api/java/nio/file/Files.html] 
> class or through bug fixes.
> This is a tracking Jira to revisit the above mitigations once JDK7 becomes 
> the platform supported by the Hadoop community. This work would allow a 
> significant portion of the native platform-dependent code to be replaced with 
> Java equivalents, which is a win for Hadoop cross-platform support. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8886) Remove KFS support

2013-05-22 Thread Timothy St. Clair (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13664211#comment-13664211
 ] 

Timothy St. Clair commented on HADOOP-8886:
---

Is there a reason this was not applied to the 2.X series? 

> Remove KFS support
> --
>
> Key: HADOOP-8886
> URL: https://issues.apache.org/jira/browse/HADOOP-8886
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Reporter: Eli Collins
>Assignee: Eli Collins
> Fix For: 3.0.0
>
> Attachments: hadoop-8886.txt
>
>
> KFS is no longer maintained (is replaced by QFS, which HADOOP-8885 is 
> adding), let's remove it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9582) Non-existent file to "hadoop fs -conf" doesn't throw error

2013-05-22 Thread Ashwin Shankar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashwin Shankar updated HADOOP-9582:
---

Attachment: HADOOP-9582.txt

Added AssertionFailedError in unit tests and corrected spacing issues.

> Non-existent file to "hadoop fs -conf" doesn't throw error
> --
>
> Key: HADOOP-9582
> URL: https://issues.apache.org/jira/browse/HADOOP-9582
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 3.0.0, 0.23.7, 2.0.4-alpha
>Reporter: Ashwin Shankar
> Attachments: HADOOP-9582.txt, HADOOP-9582.txt
>
>
> When we run :
> hadoop fs -conf BAD_FILE -ls /
> we expect hadoop to throw an error,but it doesn't.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9421) Convert SASL to use ProtoBuf and add lengths for non-blocking processing

2013-05-22 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13664197#comment-13664197
 ] 

Daryn Sharp commented on HADOOP-9421:
-

I've got a patch working.  BUT... a pre-existing problem is that after the 
connection header is sent, the client expects to be in SASL mode.  When it gets 
an RPC error, such as an invalid RPC version or unsupported serialization, it 
blows up trying to decode the response as SASL.  I think I am going to have to 
wrap the SASL exchange with {{RpcRequestHeaderProto}} so the client can 
determine whether the response is an RPC or a SASL protobuf.

> Convert SASL to use ProtoBuf and add lengths for non-blocking processing
> 
>
> Key: HADOOP-9421
> URL: https://issues.apache.org/jira/browse/HADOOP-9421
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 2.0.3-alpha
>Reporter: Sanjay Radia
>Assignee: Daryn Sharp
> Attachments: HADOOP-9421.patch, HADOOP-9421-v2-demo.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9581) Hadoop --config non-existent directory should result in error

2013-05-22 Thread Ashwin Shankar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13664185#comment-13664185
 ] 

Ashwin Shankar commented on HADOOP-9581:


This fix is in shell scripts, which don't have unit tests, so I've verified it 
manually.
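A minimal sketch of the kind of validation such a fix would add to the launcher scripts: when {{--config}} is supplied, fail fast if the directory is missing instead of silently falling back to defaults. The function name and error message are illustrative, not the actual patch.

```shell
# Hypothetical helper mirroring the check a launcher script could make while
# parsing "--config <dir>", before HADOOP_CONF_DIR is exported.
check_config_dir() {
  confdir="$1"
  if [ ! -d "$confdir" ]; then
    echo "Error: Cannot find configuration directory: $confdir" >&2
    return 1
  fi
  return 0
}
```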

> Hadoop --config non-existent directory should result in error 
> --
>
> Key: HADOOP-9581
> URL: https://issues.apache.org/jira/browse/HADOOP-9581
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0, 0.23.7, 2.0.4-alpha
>Reporter: Ashwin Shankar
> Attachments: HADOOP-9581.txt
>
>
> Courtesy : [~cwchung]
> {quote}Providing a non-existent config directory should result in error.
> $ hadoop dfs -ls /  : shows Hadoop DFS directory
> $ hadoop --config bad_config_dir dfs -ls : successful, showing Linux directory
> {quote}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9438) LocalFileContext does not throw an exception on mkdir for already existing directory

2013-05-22 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13664136#comment-13664136
 ] 

Steve Loughran commented on HADOOP-9438:


While looking at this, I am starting to suspect that 
{{ParentNotDirectoryException}} doesn't get thrown for a lot of the 
{{FileSystem}} implementations. Would we be able to get away with changing the 
exception that is thrown to this specific error code? As long as things were 
expecting any {{IOException}} -as the tests do- this should still work.

> LocalFileContext does not throw an exception on mkdir for already existing 
> directory
> 
>
> Key: HADOOP-9438
> URL: https://issues.apache.org/jira/browse/HADOOP-9438
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.0.3-alpha
>Reporter: Robert Joseph Evans
>Priority: Critical
> Attachments: HADOOP-9438.20130501.1.patch, 
> HADOOP-9438.20130521.1.patch, HADOOP-9438.patch, HADOOP-9438.patch
>
>
> according to 
> http://hadoop.apache.org/docs/current/api/org/apache/hadoop/fs/FileContext.html#mkdir%28org.apache.hadoop.fs.Path,%20org.apache.hadoop.fs.permission.FsPermission,%20boolean%29
> mkdir should throw a FileAlreadyExistsException if the directory already exists.
> I tested this and 
> {code}
> FileContext lfc = FileContext.getLocalFSFileContext(new Configuration());
> Path p = new Path("/tmp/bobby.12345");
> FsPermission cachePerms = new FsPermission((short) 0755);
> lfc.mkdir(p, cachePerms, false);
> lfc.mkdir(p, cachePerms, false);
> {code}
> never throws an exception.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9582) Non-existent file to "hadoop fs -conf" doesn't throw error

2013-05-22 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13664005#comment-13664005
 ] 

Steve Loughran commented on HADOOP-9582:


Looks good, both code and tests.
# there are a couple of places where an extra space or two between things would 
bring the code more in line with the style guidelines
# the final {{assertTrue}} could give better diagnostics when a different fault 
is raised. A {{throw new AssertionFailedError("Expected FileNotFound, got: 
"+th).initCause(th);}} would do this.

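One wrinkle with that one-liner: {{Throwable#initCause}} returns {{Throwable}}, so throwing its result directly forces the enclosing method to declare {{throws Throwable}}. Attaching the cause before throwing keeps the declared type. A self-contained sketch, using a local stand-in for JUnit 3's {{AssertionFailedError}} (which has no {{(String, Throwable)}} constructor, hence the {{initCause}} dance):

```java
// Stand-in for junit.framework.AssertionFailedError so the sketch compiles
// without JUnit on the classpath.
class AssertionFailedError extends Error {
  AssertionFailedError(String message) { super(message); }
}

public class DiagnosticRethrow {
  // Convert an unexpected Throwable into a test failure that names the
  // expected exception and preserves the original stack trace as the cause.
  static void failWithCause(Throwable th) {
    AssertionFailedError failure =
        new AssertionFailedError("Expected FileNotFoundException, got: " + th);
    failure.initCause(th);
    throw failure;
  }
}
```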

> Non-existent file to "hadoop fs -conf" doesn't throw error
> --
>
> Key: HADOOP-9582
> URL: https://issues.apache.org/jira/browse/HADOOP-9582
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 3.0.0, 0.23.7, 2.0.4-alpha
>Reporter: Ashwin Shankar
> Attachments: HADOOP-9582.txt
>
>
> When we run :
> hadoop fs -conf BAD_FILE -ls /
> we expect hadoop to throw an error,but it doesn't.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9590) Move to JDK7 improved APIs for file operations when available

2013-05-22 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13663991#comment-13663991
 ] 

Steve Loughran commented on HADOOP-9590:


Not so much "the supported platform" as "the minimum platform supported by 
Hadoop". This is something that could maybe be considered in the 2.2.+/3.x 
timeframe.

Note that currently you can't build Hadoop on Java7 on OS/X, see HADOOP-9350 
for the specifics. Getting Hadoop to build and run reliably on Java7 is a 
prerequisite to this.

Maybe we should create two meta-JIRAs: first "build and run on Java7 
everywhere", and only after that, "adopt Java7-specific features".

> Move to JDK7 improved APIs for file operations when available
> -
>
> Key: HADOOP-9590
> URL: https://issues.apache.org/jira/browse/HADOOP-9590
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ivan Mitic
>
> JDK6 does not have complete support for local file system file operations. 
> Specifically:
> - There are no symlink/hardlink APIs, which forced Hadoop to fall back to 
> shell-based tooling
> - No error information is returned when File#mkdir/mkdirs or File#renameTo 
> fails, making it unnecessarily hard to troubleshoot some issues
> - File#canRead/canWrite/canExecute do not perform any access checks on 
> Windows, making the APIs inconsistent with the Unix behavior
> - File#setReadable/setWritable/setExecutable do not change access rights on 
> Windows, making the APIs inconsistent with the Unix behavior
> - File#length does not work as expected on symlinks on Windows
> - File#renameTo does not work as expected on symlinks on Windows
> All of the above resulted in the Hadoop community having to fill in the gaps 
> by providing equivalent native implementations or applying workarounds. 
> JDK7 addressed (as far as I know) all (or most) of the above problems, either 
> through the newly introduced 
> [Files|http://docs.oracle.com/javase/7/docs/api/java/nio/file/Files.html] 
> class or through bug fixes.
> This is a tracking Jira to revisit the above mitigations once JDK7 becomes 
> the platform supported by the Hadoop community. This work would allow a 
> significant portion of the native platform-dependent code to be replaced with 
> Java equivalents, which is a win for Hadoop cross-platform support. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9438) LocalFileContext does not throw an exception on mkdir for already existing directory

2013-05-22 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HADOOP-9438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13663970#comment-13663970
 ] 

Rémy SAISSY commented on HADOOP-9438:
-

I have just created the jira issue: 
https://issues.apache.org/jira/browse/MAPREDUCE-5264
I will work on it soon.


> LocalFileContext does not throw an exception on mkdir for already existing 
> directory
> 
>
> Key: HADOOP-9438
> URL: https://issues.apache.org/jira/browse/HADOOP-9438
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.0.3-alpha
>Reporter: Robert Joseph Evans
>Priority: Critical
> Attachments: HADOOP-9438.20130501.1.patch, 
> HADOOP-9438.20130521.1.patch, HADOOP-9438.patch, HADOOP-9438.patch
>
>
> according to 
> http://hadoop.apache.org/docs/current/api/org/apache/hadoop/fs/FileContext.html#mkdir%28org.apache.hadoop.fs.Path,%20org.apache.hadoop.fs.permission.FsPermission,%20boolean%29
> mkdir should throw a FileAlreadyExistsException if the directory already exists.
> I tested this and 
> {code}
> FileContext lfc = FileContext.getLocalFSFileContext(new Configuration());
> Path p = new Path("/tmp/bobby.12345");
> FsPermission cachePerms = new FsPermission((short) 0755);
> lfc.mkdir(p, cachePerms, false);
> lfc.mkdir(p, cachePerms, false);
> {code}
> never throws an exception.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira