[jira] [Updated] (HADOOP-8562) Enhancements to Hadoop for Windows Server and Windows Azure development and runtime environments

2013-03-04 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-8562:


Status: Patch Available  (was: Open)

Submitting latest merge patch to Jenkins precommit build.

> Enhancements to Hadoop for Windows Server and Windows Azure development and 
> runtime environments
> 
>
> Key: HADOOP-8562
> URL: https://issues.apache.org/jira/browse/HADOOP-8562
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 3.0.0
>Reporter: Bikas Saha
>Assignee: Bikas Saha
> Attachments: branch-trunk-win.min-notest.patch, 
> branch-trunk-win-min.patch, branch-trunk-win.min.patch, 
> branch-trunk-win.patch, branch-trunk-win.patch, branch-trunk-win.patch, 
> branch-trunk-win.patch, branch-trunk-win.patch, branch-trunk-win.patch, 
> branch-trunk-win.patch, branch-trunk-win.patch, branch-trunk-win.patch, 
> test-untar.tar, test-untar.tgz
>
>
> This JIRA tracks the work that needs to be done on trunk to enable Hadoop to 
> run on Windows Server and Azure environments. This incorporates porting 
> relevant work from the similar effort on branch 1 tracked via HADOOP-8079.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8562) Enhancements to Hadoop for Windows Server and Windows Azure development and runtime environments

2013-03-04 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-8562:


Attachment: branch-trunk-win.patch

> Enhancements to Hadoop for Windows Server and Windows Azure development and 
> runtime environments
> 
>
> Key: HADOOP-8562
> URL: https://issues.apache.org/jira/browse/HADOOP-8562
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 3.0.0
>Reporter: Bikas Saha
>Assignee: Bikas Saha
> Attachments: branch-trunk-win.min-notest.patch, 
> branch-trunk-win-min.patch, branch-trunk-win.min.patch, 
> branch-trunk-win.patch, branch-trunk-win.patch, branch-trunk-win.patch, 
> branch-trunk-win.patch, branch-trunk-win.patch, branch-trunk-win.patch, 
> branch-trunk-win.patch, branch-trunk-win.patch, branch-trunk-win.patch, 
> test-untar.tar, test-untar.tgz
>
>
> This JIRA tracks the work that needs to be done on trunk to enable Hadoop to 
> run on Windows Server and Azure environments. This incorporates porting 
> relevant work from the similar effort on branch 1 tracked via HADOOP-8079.



[jira] [Updated] (HADOOP-9099) NetUtils.normalizeHostName fails on domains where UnknownHost resolves to an IP address

2013-03-04 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mitic updated HADOOP-9099:
---

Attachment: HADOOP-9099.trunk.patch

Ran into the same failure in trunk (branch-trunk-win). Attaching a 
trunk-compatible patch. 

Nicholas, can we please commit this to trunk as well?

> NetUtils.normalizeHostName fails on domains where UnknownHost resolves to an 
> IP address
> ---
>
> Key: HADOOP-9099
> URL: https://issues.apache.org/jira/browse/HADOOP-9099
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 1-win
>Reporter: Ivan Mitic
>Assignee: Ivan Mitic
>Priority: Minor
> Fix For: 1.2.0, 1-win
>
> Attachments: HADOOP-9099.branch-1-win.patch, HADOOP-9099.trunk.patch
>
>
> I just hit this failure. We should use some more unique string for 
> "UnknownHost":
> Testcase: testNormalizeHostName took 0.007 sec
>   FAILED
> expected:<[65.53.5.181]> but was:<[UnknownHost]>
> junit.framework.AssertionFailedError: expected:<[65.53.5.181]> but 
> was:<[UnknownHost]>
>   at 
> org.apache.hadoop.net.TestNetUtils.testNormalizeHostName(TestNetUtils.java:347)
> Will post a patch in a bit.
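As background on why the expectation breaks: on networks whose DNS wildcard-resolves arbitrary names, even "UnknownHost" comes back with an address. A minimal, network-free sketch of that behavior (the class and helper names are illustrative, not Hadoop's actual API; the resolver is injected as a function so no real DNS lookup happens):

```java
import java.util.function.Function;

public class NormalizeSketch {
    // Mimics the normalizeHostName contract: return the resolved IP, or the
    // original name when resolution fails. The resolver is injected so the
    // wildcard-DNS failure mode can be shown without a network call.
    static String normalize(String name, Function<String, String> resolver) {
        String ip = resolver.apply(name);
        return ip != null ? ip : name;
    }

    public static void main(String[] args) {
        // A strict resolver: unknown names do not resolve.
        Function<String, String> strict = h -> null;
        // A wildcard DNS domain answers for any name, even "UnknownHost".
        Function<String, String> wildcard = h -> "65.53.5.181";

        System.out.println(normalize("UnknownHost", strict));   // UnknownHost
        System.out.println(normalize("UnknownHost", wildcard)); // 65.53.5.181
    }
}
```

The actual fix here is just choosing a test string that no resolver will answer for; the sketch only shows why an ordinary-looking name can unexpectedly resolve.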



[jira] [Commented] (HADOOP-9250) Windows installer bugfixes

2013-03-04 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13593075#comment-13593075
 ] 

Ivan Mitic commented on HADOOP-9250:


Thanks Suresh!

> Windows installer bugfixes
> --
>
> Key: HADOOP-9250
> URL: https://issues.apache.org/jira/browse/HADOOP-9250
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 1-win
>Reporter: Ivan Mitic
>Assignee: Ivan Mitic
> Fix For: 1-win
>
> Attachments: HADOOP-9250.branch-1-win.installerbugs.patch
>
>
> A few bugfixes and improvements we made to the install scripts on Windows.



[jira] [Resolved] (HADOOP-9250) Windows installer bugfixes

2013-03-04 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas resolved HADOOP-9250.
-

   Resolution: Fixed
Fix Version/s: 1-win
 Hadoop Flags: Reviewed

+1. I committed the patch to branch-1-win.

Thank you Ivan! Thanks to [~kanna...@microsoft.com] for the review.

> Windows installer bugfixes
> --
>
> Key: HADOOP-9250
> URL: https://issues.apache.org/jira/browse/HADOOP-9250
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 1-win
>Reporter: Ivan Mitic
>Assignee: Ivan Mitic
> Fix For: 1-win
>
> Attachments: HADOOP-9250.branch-1-win.installerbugs.patch
>
>
> A few bugfixes and improvements we made to the install scripts on Windows.



[jira] [Commented] (HADOOP-8973) DiskChecker cannot reliably detect an inaccessible disk on Windows with NTFS ACLs

2013-03-04 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13593055#comment-13593055
 ] 

Chris Nauroth commented on HADOOP-8973:
---

Thanks for the comments, everyone.  This is very helpful.

{quote}
Does this change the current directory of the calling process?
{quote}

No, this forks a whole new process, and runs the cd within that process.  The 
working directory of the calling process is unchanged.  I think this is safe.

{quote}
Leveraging winutils in CheckDisk would provide a nice symmetry in the test.
{quote}

{quote}
Chris, I think it should be relatively easy to provide some API like this 
either through winutils or JNI.
{quote}

On all of the other discussion points, I think the summary is that we have 
discovered that there are deficiencies in the current logic of {{DiskChecker}}, 
and it's not a problem specific to Windows.  It's a problem on Linux too.  
Considering this, I'd still like to proceed with the basic approach in the 
current patch.  We can file a follow-up jira to fix the problem more 
completely, with full consideration for other permission models that include 
things like POSIX ACLs and NTFS ACLs.  (My opinion is that we should just wait 
for JDK7 instead of investing in JNI calls, but that's just my opinion.)  The 
scope of this follow-up jira would include both Linux and Windows.

Arpit had provided some feedback on the actual code, and I do want to provide a 
new patch to address that feedback.  I'm planning on uploading a new patch 
tomorrow.  If anyone disagrees with the approach though, please let me know so 
that I don't waste time preparing a patch that is objectionable.  :-)

{quote}
Well, in that case it would be a standard pattern everywhere, because 
everywhere the code simply checks the value of the permissions and not whether 
the process checking that value actually has the right membership wrt that 
value. Isn't it so? Irrespective of OS.
{quote}

I'm reluctant to change the code so that the permission checks are less 
comprehensive on Linux for the sake of cross-platform consistency.  Right now, 
we have one overload of {{DiskChecker#checkDir}} that is correct AFAIK, and 
another overload of {{DiskChecker#checkDir}} that is incomplete when 
considering more sophisticated permission models on the local file system, like 
POSIX ACLs.  The approach in the current patch at least achieves consistent 
behavior between Linux and Windows, so at least we have symmetry with regards 
to that.
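The probing approach under discussion (exercising real operations instead of trusting {{File#canRead}}/{{File#canWrite}}, which the JVM bug makes unreliable under NTFS ACLs) can be sketched roughly as follows; this is an illustration of the idea, not the attached patch:

```java
import java.io.File;
import java.io.IOException;

public class DirProbe {
    // Probe readability by actually listing: File#list returns null when
    // the directory cannot be read (or is not a directory at all).
    static boolean canActuallyRead(File dir) {
        return dir.isDirectory() && dir.list() != null;
    }

    // Probe writability by actually creating a file; creation succeeding
    // is the signal, and the probe file is removed on a best-effort basis.
    static boolean canActuallyWrite(File dir) {
        try {
            File probe = File.createTempFile("diskcheck", null, dir);
            probe.delete(); // best-effort cleanup
            return true;
        } catch (IOException e) {
            return false;
        }
    }
}
```

Because the checks perform the operations themselves, they reflect whatever the OS permission model actually enforces, on Linux and Windows alike.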


> DiskChecker cannot reliably detect an inaccessible disk on Windows with NTFS 
> ACLs
> -
>
> Key: HADOOP-8973
> URL: https://issues.apache.org/jira/browse/HADOOP-8973
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: trunk-win
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-8973-branch-trunk-win.patch
>
>
> DiskChecker.checkDir uses File.canRead, File.canWrite, and File.canExecute to 
> check if a directory is inaccessible.  These APIs are not reliable on Windows 
> with NTFS ACLs due to a known JVM bug.



[jira] [Commented] (HADOOP-8973) DiskChecker cannot reliably detect an inaccessible disk on Windows with NTFS ACLs

2013-03-04 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13592988#comment-13592988
 ] 

Bikas Saha commented on HADOOP-8973:


That's a good idea if we need to find out whether the current process user has 
certain permissions. But I think the point we are currently debating is whether 
we can make checkDisk(File) do what checkDisk(FS, Path, Perm) does, perhaps by 
simply calling the second function. If checkDisk(FS, Path, Perm) meets our 
other needs then it should be enough. I think the code currently checks for 
expected permissions, implicitly assuming that the daemon processes run as the 
users who are supposed to have those permissions.

> DiskChecker cannot reliably detect an inaccessible disk on Windows with NTFS 
> ACLs
> -
>
> Key: HADOOP-8973
> URL: https://issues.apache.org/jira/browse/HADOOP-8973
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: trunk-win
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-8973-branch-trunk-win.patch
>
>
> DiskChecker.checkDir uses File.canRead, File.canWrite, and File.canExecute to 
> check if a directory is inaccessible.  These APIs are not reliable on Windows 
> with NTFS ACLs due to a known JVM bug.



[jira] [Updated] (HADOOP-8462) Native-code implementation of bzip2 codec

2013-03-04 Thread Govind Kamat (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Govind Kamat updated HADOOP-8462:
-

Attachment: HADOOP-8462-trunk.patch
HADOOP-8462-2.0.2a.patch

> Native-code implementation of bzip2 codec
> -
>
> Key: HADOOP-8462
> URL: https://issues.apache.org/jira/browse/HADOOP-8462
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 0.23.1
>Reporter: Govind Kamat
>Assignee: Govind Kamat
> Attachments: HADOOP-8462-2.0.2a.1.patch, HADOOP-8462-2.0.2a.patch, 
> HADOOP-8462-2.0.2a.patch, HADOOP-8462-trunk.1.patch, 
> HADOOP-8462-trunk.1.patch, HADOOP-8462-trunk.patch, HADOOP-8462-trunk.patch, 
> HADOOP-8462-trunk.patch, HADOOP-8462-trunk.patch
>
>   Original Estimate: 672h
>  Remaining Estimate: 672h
>
> The bzip2 codec supplied with Hadoop is currently available only as a Java 
> implementation.  A version that uses the system bzip2 library can provide 
> improved performance and a better memory footprint.  This will also make it 
> feasible to utilize alternative bzip2 libraries that may perform better for 
> specific jobs.



[jira] [Commented] (HADOOP-9338) FsShell Copy Commands Should Optionally Preserve File Attributes

2013-03-04 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13592967#comment-13592967
 ] 

Aaron T. Myers commented on HADOOP-9338:


Hi Nick, the patch looks pretty good to me. I just have three comments:

# I recommend adding a JavaDoc comment to RawLocalFileSystem#setTimes saying 
explicitly that access time is not set by that method.
# In the comment for CommandWithDestination#setPreserve, I'd recommend 
explicitly saying that the only attributes the option will attempt to preserve 
are modtime and atime.
# Similarly to #2 above, in the command usage text I recommend making it clear 
which attributes will be preserved. This seems particularly important since the 
implementation of this '-p' is not quite the same as the usual 'cp -p', since 
the latter will also attempt to preserve file ownership and mode. For that 
matter, any reason we shouldn't make this option in the Hadoop shell attempt to 
preserve all of these attributes as well?
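As a point of reference for #2 and #3, the narrow "preserve only modtime and atime" semantics can be sketched with java.nio; this is an illustrative stand-in, not the FsShell implementation:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.nio.file.attribute.BasicFileAttributeView;
import java.nio.file.attribute.BasicFileAttributes;

public class CopyPreserveTimes {
    // Copy src to dst, then carry over only the timestamps -- the narrow
    // subset of attributes discussed here. Ownership and mode are left
    // untouched, unlike the Unix 'cp -p'.
    static void copyPreservingTimes(Path src, Path dst) throws Exception {
        Files.copy(src, dst, StandardCopyOption.REPLACE_EXISTING);
        BasicFileAttributes attrs =
            Files.readAttributes(src, BasicFileAttributes.class);
        Files.getFileAttributeView(dst, BasicFileAttributeView.class)
             .setTimes(attrs.lastModifiedTime(), attrs.lastAccessTime(), null);
    }
}
```

The null third argument leaves the creation time unchanged, mirroring the "only modtime and atime" scope.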

> FsShell Copy Commands Should Optionally Preserve File Attributes
> 
>
> Key: HADOOP-9338
> URL: https://issues.apache.org/jira/browse/HADOOP-9338
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 0.20.2, 2.0.3-alpha
>Reporter: Nick White
>Assignee: Nick White
> Attachments: HADOOP-9338.0.patch, HADOOP-9338.1.patch, 
> HADOOP-9338.2.patch
>
>
> The attached patch adds a -p flag to the copyFromLocal and copyToLocal 
> FsShell commands that behaves (as far as possible) like the unix 'cp' 
> command's -p flag (i.e. preserves file last access and last modification 
> times).



[jira] [Resolved] (HADOOP-9232) JniBasedUnixGroupsMappingWithFallback fails on Windows with UnsatisfiedLinkError

2013-03-04 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas resolved HADOOP-9232.
-

   Resolution: Fixed
Fix Version/s: trunk-win
 Hadoop Flags: Reviewed

Committed the patch to the branch.

Thank you, Ivan! Thank you Arpit, Chuan and Chris for the review.

> JniBasedUnixGroupsMappingWithFallback fails on Windows with 
> UnsatisfiedLinkError
> 
>
> Key: HADOOP-9232
> URL: https://issues.apache.org/jira/browse/HADOOP-9232
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native, security
>Affects Versions: trunk-win
>Reporter: Chris Nauroth
>Assignee: Ivan Mitic
> Fix For: trunk-win
>
> Attachments: HADOOP-9232.branch-trunk-win.jnigroups.2.patch, 
> HADOOP-9232.branch-trunk-win.jnigroups.3.patch, 
> HADOOP-9232.branch-trunk-win.jnigroups.patch, HADOOP-9232.patch
>
>
> {{JniBasedUnixGroupsMapping}} calls native code which isn't implemented 
> properly for Windows, causing {{UnsatisfiedLinkError}}.  The fallback logic 
> in {{JniBasedUnixGroupsMappingWithFallback}} works by checking if the native 
> code is loaded during startup.  In this case, hadoop.dll is present and 
> loaded, but it doesn't contain the right code.  There will be no attempt to 
> fallback to {{ShellBasedUnixGroupsMapping}}.
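One mitigation consistent with the description above is a call-time fallback, since a startup-time "is the native library loaded?" check cannot catch a present-but-incomplete hadoop.dll. A hedged sketch using hypothetical interfaces, not the actual Hadoop classes:

```java
import java.util.Arrays;
import java.util.List;

public class GroupsWithFallback {
    // Hypothetical stand-in for a group-mapping provider.
    interface GroupMapping {
        List<String> getGroups(String user);
    }

    // The DLL may load yet lack the needed symbol, so the failure only
    // surfaces as UnsatisfiedLinkError on the first actual call; catching
    // it here lets the shell-based mapping take over.
    static List<String> getGroups(String user, GroupMapping nativeImpl,
                                  GroupMapping shellImpl) {
        try {
            return nativeImpl.getGroups(user);
        } catch (UnsatisfiedLinkError e) {
            return shellImpl.getGroups(user);
        }
    }

    public static void main(String[] args) {
        GroupMapping broken =
            u -> { throw new UnsatisfiedLinkError("missing symbol"); };
        GroupMapping shell = u -> Arrays.asList("users", "staff");
        System.out.println(getGroups("ivan", broken, shell)); // [users, staff]
    }
}
```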



[jira] [Updated] (HADOOP-9232) JniBasedUnixGroupsMappingWithFallback fails on Windows with UnsatisfiedLinkError

2013-03-04 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-9232:


Attachment: HADOOP-9232.patch

+1 for the patch. 

Minor update to the patch, which did not apply cleanly to the latest 
branch-trunk-win.


> JniBasedUnixGroupsMappingWithFallback fails on Windows with 
> UnsatisfiedLinkError
> 
>
> Key: HADOOP-9232
> URL: https://issues.apache.org/jira/browse/HADOOP-9232
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native, security
>Affects Versions: trunk-win
>Reporter: Chris Nauroth
>Assignee: Ivan Mitic
> Attachments: HADOOP-9232.branch-trunk-win.jnigroups.2.patch, 
> HADOOP-9232.branch-trunk-win.jnigroups.3.patch, 
> HADOOP-9232.branch-trunk-win.jnigroups.patch, HADOOP-9232.patch
>
>
> {{JniBasedUnixGroupsMapping}} calls native code which isn't implemented 
> properly for Windows, causing {{UnsatisfiedLinkError}}.  The fallback logic 
> in {{JniBasedUnixGroupsMappingWithFallback}} works by checking if the native 
> code is loaded during startup.  In this case, hadoop.dll is present and 
> loaded, but it doesn't contain the right code.  There will be no attempt to 
> fallback to {{ShellBasedUnixGroupsMapping}}.



[jira] [Commented] (HADOOP-9117) replace protoc ant plugin exec with a maven plugin

2013-03-04 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13592932#comment-13592932
 ] 

Tsz Wo (Nicholas), SZE commented on HADOOP-9117:


@Junping, I only ignored the errors (manually deleted them in Eclipse) but did 
not solve the problem completely.  The errors will come back after rebuilding 
everything.

@Alejandro, any idea to fix it?

> replace protoc ant plugin exec with a maven plugin
> --
>
> Key: HADOOP-9117
> URL: https://issues.apache.org/jira/browse/HADOOP-9117
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.0.2-alpha
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Fix For: 2.0.4-beta
>
> Attachments: HADOOP-9117.patch, HADOOP-9117.patch, HADOOP-9117.patch, 
> HADOOP-9117.patch, HADOOP-9117.patch, HADOOP-9117.patch
>
>
> The protoc compiler is currently invoked using ant plugin exec. There is a 
> bug in the ant plugin exec task which does not consume the STDOUT or STDERR 
> appropriately making the build to stop sometimes (you need to press enter to 
> continue).
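For context on the hang described above: a child process blocks once the OS pipe buffer fills if its output is never read. A generic Java sketch of draining the streams (illustrative only, not the Maven plugin's code):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class DrainedExec {
    // Run a command while continuously draining its combined stdout/stderr.
    // If the pipes were left unread, the child could stall once the pipe
    // buffer fills -- the symptom described for the ant exec task.
    static int runDrained(String... cmd) throws Exception {
        Process p = new ProcessBuilder(cmd).redirectErrorStream(true).start();
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(p.getInputStream()))) {
            while (r.readLine() != null) {
                // discard; a real build tool would log each line
            }
        }
        return p.waitFor();
    }
}
```

A dedicated protoc Maven plugin handles this stream consumption itself, which is the motivation for the change.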



[jira] [Commented] (HADOOP-8973) DiskChecker cannot reliably detect an inaccessible disk on Windows with NTFS ACLs

2013-03-04 Thread Chuan Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13592931#comment-13592931
 ] 

Chuan Liu commented on HADOOP-8973:
---

I want to chime in a little on this.

> The reason that methods like File#canRead are used here is that it checks if 
> the current process can read the file, with full consideration of the 
> specific user that launched the process.

Chris, I think it should be relatively easy to provide some API like this 
either through winutils or JNI. In Windows (and the new POSIX ACL model), file 
owner and file ACL are separate concepts (compared with the traditional chmod 
permission model, where the first 3 bits are tightly associated with the file 
owner). In "winutils ls", we retrieve the effective rights for the owner 
through the API call GetEffectiveRightsForSid(), where the SID identifies the 
user whose access permissions we want to check. Calling this API with a given 
user SID, we should be able to get that user's permissions on the file.

I notice we already have the "getfacl" command line utility on Linux. What do 
you think about providing a simple one in "winutils"? E.g. 
{code}
>winutils getfacl -u [username] file
user:[username]:r-x
{code}
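If a command along these lines existed, its output would be straightforward to consume; here is a parser for the sketched user:[username]:r-x format (note the command and its output format above are a proposal, not an existing winutils feature):

```java
public class FaclLine {
    // Parse a "user:name:rwx"-style line into read/write/execute flags.
    // A '-' in any position means that permission is absent.
    static boolean[] parsePerms(String line) {
        String perms = line.substring(line.lastIndexOf(':') + 1);
        return new boolean[] {
            perms.charAt(0) == 'r',
            perms.charAt(1) == 'w',
            perms.charAt(2) == 'x'
        };
    }

    public static void main(String[] args) {
        boolean[] p = parsePerms("user:ivan:r-x");
        System.out.println(p[0] + " " + p[1] + " " + p[2]); // true false true
    }
}
```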

> DiskChecker cannot reliably detect an inaccessible disk on Windows with NTFS 
> ACLs
> -
>
> Key: HADOOP-8973
> URL: https://issues.apache.org/jira/browse/HADOOP-8973
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: trunk-win
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-8973-branch-trunk-win.patch
>
>
> DiskChecker.checkDir uses File.canRead, File.canWrite, and File.canExecute to 
> check if a directory is inaccessible.  These APIs are not reliable on Windows 
> with NTFS ACLs due to a known JVM bug.



[jira] [Updated] (HADOOP-9359) Add Windows build and unit test to test-patch pre-commit testing

2013-03-04 Thread Matt Foley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-9359:
---

Description: 
The "test-patch" utility is triggered by "Patch Available" state in Jira, and 
runs nine different sets of builds, tests, and static analysis tools.  
Currently only the Linux environment is tested.  Need to add tests for Java 
build under Windows, and unit test execution under Windows.

At this time, the community has decided that "-1" on these new additional tests 
shall not block commits to the code base.  However, contributors and code 
reviewers are encouraged to utilize the information provided by these tests to 
help keep Hadoop cross-platform compatible.  Modify 
http://wiki.apache.org/hadoop/HowToContribute to document this.

  was:
The "test-patch" utility is triggered by "Patch Available" state in Jira, and 
runs nine different sets of builds, tests, and static analysis tools.  
Currently only the Linux environment is tested.  Need to add tests for Java 
build under Windows, and unit test execution under Windows.

At this time, the community has decided that "-1" on these new additional tests 
shall not block commits to the code base.  However, contributors and code 
reviewers are encouraged to utilize the information provided by these tests to 
help keep Hadoop cross-platform compatible.


> Add Windows build and unit test to test-patch pre-commit testing
> 
>
> Key: HADOOP-9359
> URL: https://issues.apache.org/jira/browse/HADOOP-9359
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: native
>Reporter: Matt Foley
>Assignee: Matt Foley
>
> The "test-patch" utility is triggered by "Patch Available" state in Jira, and 
> runs nine different sets of builds, tests, and static analysis tools.  
> Currently only the Linux environment is tested.  Need to add tests for Java 
> build under Windows, and unit test execution under Windows.
> At this time, the community has decided that "-1" on these new additional 
> tests shall not block commits to the code base.  However, contributors and 
> code reviewers are encouraged to utilize the information provided by these 
> tests to help keep Hadoop cross-platform compatible.  Modify 
> http://wiki.apache.org/hadoop/HowToContribute to document this.



[jira] [Commented] (HADOOP-8973) DiskChecker cannot reliably detect an inaccessible disk on Windows with NTFS ACLs

2013-03-04 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13592918#comment-13592918
 ] 

Ivan Mitic commented on HADOOP-8973:


Thanks Chris for the patch. I am fine with the current patch, but would still 
like to reiterate Bikas' question of why we cannot use 
impliesRead/impliesWrite etc. At this point, we already assume POSIX-like 
semantics in Hadoop across the board (this is why we have winutils). Leveraging 
winutils in CheckDisk would provide a nice symmetry in the test.

You bring up a good point about not completely covering the NTFS ACL space. I 
think this is fine for now. Eventually, as we improve, we can think of good 
ways to abstract this out, but that goes well beyond what is needed today.

Let me know what you think.



> DiskChecker cannot reliably detect an inaccessible disk on Windows with NTFS 
> ACLs
> -
>
> Key: HADOOP-8973
> URL: https://issues.apache.org/jira/browse/HADOOP-8973
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: trunk-win
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-8973-branch-trunk-win.patch
>
>
> DiskChecker.checkDir uses File.canRead, File.canWrite, and File.canExecute to 
> check if a directory is inaccessible.  These APIs are not reliable on Windows 
> with NTFS ACLs due to a known JVM bug.



[jira] [Work started] (HADOOP-9359) Add Windows build and unit test to test-patch pre-commit testing

2013-03-04 Thread Matt Foley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-9359 started by Matt Foley.

> Add Windows build and unit test to test-patch pre-commit testing
> 
>
> Key: HADOOP-9359
> URL: https://issues.apache.org/jira/browse/HADOOP-9359
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: native
>Reporter: Matt Foley
>Assignee: Matt Foley
>
> The "test-patch" utility is triggered by "Patch Available" state in Jira, and 
> runs nine different sets of builds, tests, and static analysis tools.  
> Currently only the Linux environment is tested.  Need to add tests for Java 
> build under Windows, and unit test execution under Windows.
> At this time, the community has decided that "-1" on these new additional 
> tests shall not block commits to the code base.  However, contributors and 
> code reviewers are encouraged to utilize the information provided by these 
> tests to help keep Hadoop cross-platform compatible.



[jira] [Commented] (HADOOP-9337) org.apache.hadoop.fs.DF.getMount() does not work on Mac OS

2013-03-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13592910#comment-13592910
 ] 

Hudson commented on HADOOP-9337:


Integrated in Hadoop-trunk-Commit #3415 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3415/])
HADOOP-9337. org.apache.hadoop.fs.DF.getMount() does not work on Mac OS. 
Contributed by Ivan A. Veselovsky. (Revision 1452622)

 Result = SUCCESS
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1452622
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/DF.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestDFVariations.java


> org.apache.hadoop.fs.DF.getMount() does not work on Mac OS
> --
>
> Key: HADOOP-9337
> URL: https://issues.apache.org/jira/browse/HADOOP-9337
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0, 0.23.7, 2.0.4-beta
> Environment: Mac OS 10.8
>Reporter: Ivan A. Veselovsky
>Assignee: Ivan A. Veselovsky
> Fix For: 2.0.4-beta
>
> Attachments: HADOOP-9337--a.patch, HADOOP-9337--b.patch, 
> HADOOP-9337-branch-0.23--a.patch
>
>
> test org.apache.hadoop.fs.TestLocalFileSystem.testReportChecksumFailure() 
> (added in HADOOP-9067) appears to fail on MacOS because 
> method org.apache.hadoop.fs.DF.getMount() does not work correctly.
> The problem is that the "df -k" command on MacOS returns output like the 
> following:
> ---
> Filesystem   1024-blocks      Used Available Capacity  iused    ifree %iused  Mounted on
> /dev/disk0s4   194879828 100327120  94552708    52% 25081778 23638177   51%   /Volumes/Data
> ---
> while the following is expected:
> ---
> Filesystem 1024-blocks  Used Available Capacity Mounted on
> /dev/mapper/vg_iveselovskyws-lv_home 420545160  15978372 383204308   5% /home
> ---
> So we see that the Mac output has 3 additional tokens.
> I can suggest 2 ways to fix the problem:
> (a) use the "-P" (POSIX) option when invoking the df command; this will 
> probably ensure uniform output on all Unix systems;
> (b) move the Mac branch to a specific "case" branch and treat it specially 
> (like we currently have for AIX, DF.java, line 214).



[jira] [Commented] (HADOOP-9359) Add Windows build and unit test to test-patch pre-commit testing

2013-03-04 Thread Matt Foley (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13592911#comment-13592911
 ] 

Matt Foley commented on HADOOP-9359:


In order to keep execution time down and isolate the new functionality, Giri 
and I intend the following design:
A separate powershell script, modeled on the existing test-patch.sh, but 
invoking only java build and unit tests, shall be added.  The new file will be 
called test-patch-win.ps1.  Existing functionality of test-patch.sh will be 
unchanged.

The Jenkins job that invokes test-patch.sh will be modified to also invoke 
test-patch-win.ps1 in parallel, utilizing a Windows Jenkins slave.  This avoids 
slowing down the user-visible speed of the pre-commit test response.  The 
current linux results will be reported in a comment as usual.  The new windows 
results will be reported in a separate comment.

> Add Windows build and unit test to test-patch pre-commit testing
> 
>
> Key: HADOOP-9359
> URL: https://issues.apache.org/jira/browse/HADOOP-9359
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: native
>Reporter: Matt Foley
>Assignee: Matt Foley
>
> The "test-patch" utility is triggered by "Patch Available" state in Jira, and 
> runs nine different sets of builds, tests, and static analysis tools.  
> Currently only the Linux environment is tested.  We need to add tests for the 
> Java build under Windows, and for unit test execution under Windows.
> At this time, the community has decided that "-1" on these new additional 
> tests shall not block commits to the code base.  However, contributors and 
> code reviewers are encouraged to utilize the information provided by these 
> tests to help keep Hadoop cross-platform compatible.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9337) org.apache.hadoop.fs.DF.getMount() does not work on Mac OS

2013-03-04 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HADOOP-9337:
---

  Resolution: Fixed
   Fix Version/s: 2.0.4-beta
Target Version/s: 2.0.4-beta
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

I've just committed this to trunk and branch-2.

Thanks a lot for the contribution, Ivan.

> org.apache.hadoop.fs.DF.getMount() does not work on Mac OS
> --
>
> Key: HADOOP-9337
> URL: https://issues.apache.org/jira/browse/HADOOP-9337
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0, 0.23.7, 2.0.4-beta
> Environment: Mac OS 10.8
>Reporter: Ivan A. Veselovsky
>Assignee: Ivan A. Veselovsky
> Fix For: 2.0.4-beta
>
> Attachments: HADOOP-9337--a.patch, HADOOP-9337--b.patch, 
> HADOOP-9337-branch-0.23--a.patch
>
>
> test org.apache.hadoop.fs.TestLocalFileSystem.testReportChecksumFailure() 
> (added in HADOOP-9067) appears to fail on MacOS because 
> method org.apache.hadoop.fs.DF.getMount() does not work correctly.
> The problem is that the "df -k" command on Mac OS returns output like the 
> following:
> ---
> Filesystem   1024-blocks      Used Available Capacity  iused    ifree %iused  Mounted on
> /dev/disk0s4   194879828 100327120  94552708    52% 25081778 23638177   51%   /Volumes/Data
> ---
> while the following is expected:
> ---
> Filesystem 1024-blocks  Used Available Capacity Mounted on
> /dev/mapper/vg_iveselovskyws-lv_home 420545160  15978372 383204308   5% /home
> ---
> So, we see that Mac's output has 3 additional tokens.
> I can suggest 2 ways to fix the problem.
> (a) use "-P" (POSIX) option when invoking df command. This will probably 
> ensure uniform output on all Unix systems;
> (b) move Mac branch to specific "case" branch and treat it specifically (like 
> we currently have for AIX, DF.java, line 214)
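Suggestion (a) can be sketched in Java as follows. This is an illustrative sketch only, not the actual DF.java change; the class and method names are invented. With "df -P", the "Mounted on" field is the last column on every POSIX-conforming system, so taking the final whitespace-separated token of the data line works uniformly (mount points containing spaces would still need extra care).

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class DfMount {
    // Run "df -P -k <path>" and return the mount point from the
    // POSIX-format output: skip the header line, then take the last
    // token of the data line ("Mounted on" is always the final column).
    public static String getMount(String path) throws Exception {
        Process p = Runtime.getRuntime().exec(
            new String[] { "df", "-P", "-k", path });
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(p.getInputStream()))) {
            r.readLine();               // header line
            String line = r.readLine(); // data line for the given path
            String[] tokens = line.trim().split("\\s+");
            return tokens[tokens.length - 1];
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(getMount("/"));
    }
}
```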

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8973) DiskChecker cannot reliably detect an inaccessible disk on Windows with NTFS ACLs

2013-03-04 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13592904#comment-13592904
 ] 

Arpit Agarwal commented on HADOOP-8973:
---

+1 on the approach for its simplicity. It is sound to check for permissions in 
this way and we should do the same on other platforms. 

A few comments on the code.

Are there any bugs other than the one you link below? I wonder if dir.canRead 
can return false when it should return true. We should skip this call on 
Windows if it is known to be broken. Same for dir.canWrite and dir.canExecute.
{code}
// This method contains several workarounds to known JVM bugs that cause
// File.canRead, File.canWrite, and File.canExecute to return incorrect
// results on Windows with NTFS ACLs.
// http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6203387
{code}


Does this change the current directory of the calling process?
{code}
String[] cdCmd = new String[] { "cmd", "/C", "cd", dir.getAbsolutePath() };
{code}
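A child process cannot change the parent's working directory; the "cd" happens only inside the spawned cmd.exe. A small invented demo (using /bin/sh as a Unix stand-in for "cmd /C cd") illustrates this:

```java
import java.io.File;

public class ChildCdDemo {
    // Spawn a child shell that changes directory, then verify the
    // parent JVM's working directory is unaffected: the directory
    // change is local to the child process.
    static boolean parentCwdUnchanged() throws Exception {
        String before = new File(".").getCanonicalPath();
        Process p = Runtime.getRuntime().exec(
            new String[] { "sh", "-c", "cd /tmp && pwd" });
        p.waitFor();
        String after = new File(".").getCanonicalPath();
        return before.equals(after);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(parentCwdUnchanged()); // prints "true"
    }
}
```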


> DiskChecker cannot reliably detect an inaccessible disk on Windows with NTFS 
> ACLs
> -
>
> Key: HADOOP-8973
> URL: https://issues.apache.org/jira/browse/HADOOP-8973
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: trunk-win
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-8973-branch-trunk-win.patch
>
>
> DiskChecker.checkDir uses File.canRead, File.canWrite, and File.canExecute to 
> check if a directory is inaccessible.  These APIs are not reliable on Windows 
> with NTFS ACLs due to a known JVM bug.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9359) Add Windows build and unit test to test-patch pre-commit testing

2013-03-04 Thread Matt Foley (JIRA)
Matt Foley created HADOOP-9359:
--

 Summary: Add Windows build and unit test to test-patch pre-commit 
testing
 Key: HADOOP-9359
 URL: https://issues.apache.org/jira/browse/HADOOP-9359
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Matt Foley
Assignee: Matt Foley


The "test-patch" utility is triggered by "Patch Available" state in Jira, and 
runs nine different sets of builds, tests, and static analysis tools.  
Currently only the Linux environment is tested.  We need to add tests for the 
Java build under Windows, and for unit test execution under Windows.

At this time, the community has decided that "-1" on these new additional tests 
shall not block commits to the code base.  However, contributors and code 
reviewers are encouraged to utilize the information provided by these tests to 
help keep Hadoop cross-platform compatible.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9337) org.apache.hadoop.fs.DF.getMount() does not work on Mac OS

2013-03-04 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13592895#comment-13592895
 ] 

Aaron T. Myers commented on HADOOP-9337:


+1, the patch looks good to me. I'm going to commit this momentarily.

> org.apache.hadoop.fs.DF.getMount() does not work on Mac OS
> --
>
> Key: HADOOP-9337
> URL: https://issues.apache.org/jira/browse/HADOOP-9337
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0, 0.23.7, 2.0.4-beta
> Environment: Mac OS 10.8
>Reporter: Ivan A. Veselovsky
>Assignee: Ivan A. Veselovsky
> Attachments: HADOOP-9337--a.patch, HADOOP-9337--b.patch, 
> HADOOP-9337-branch-0.23--a.patch
>
>
> test org.apache.hadoop.fs.TestLocalFileSystem.testReportChecksumFailure() 
> (added in HADOOP-9067) appears to fail on MacOS because 
> method org.apache.hadoop.fs.DF.getMount() does not work correctly.
> The problem is that the "df -k" command on Mac OS returns output like the 
> following:
> ---
> Filesystem   1024-blocks      Used Available Capacity  iused    ifree %iused  Mounted on
> /dev/disk0s4   194879828 100327120  94552708    52% 25081778 23638177   51%   /Volumes/Data
> ---
> while the following is expected:
> ---
> Filesystem 1024-blocks  Used Available Capacity Mounted on
> /dev/mapper/vg_iveselovskyws-lv_home 420545160  15978372 383204308   5% /home
> ---
> So, we see that Mac's output has 3 additional tokens.
> I can suggest 2 ways to fix the problem.
> (a) use "-P" (POSIX) option when invoking df command. This will probably 
> ensure uniform output on all Unix systems;
> (b) move Mac branch to specific "case" branch and treat it specifically (like 
> we currently have for AIX, DF.java, line 214)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8973) DiskChecker cannot reliably detect an inaccessible disk on Windows with NTFS ACLs

2013-03-04 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13592877#comment-13592877
 ] 

Bikas Saha commented on HADOOP-8973:


Well, in that case it would be a standard pattern everywhere, because the code 
everywhere simply checks the value of the permissions and not whether the 
process checking that value actually has the right membership with respect to 
that value. Isn't that so? Irrespective of OS.

Also, one can make a case that checkDir(File dir) should end up calling 
checkDir(FileSystem, FilePath, "rwx") instead of duplicating the logic.


> DiskChecker cannot reliably detect an inaccessible disk on Windows with NTFS 
> ACLs
> -
>
> Key: HADOOP-8973
> URL: https://issues.apache.org/jira/browse/HADOOP-8973
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: trunk-win
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-8973-branch-trunk-win.patch
>
>
> DiskChecker.checkDir uses File.canRead, File.canWrite, and File.canExecute to 
> check if a directory is inaccessible.  These APIs are not reliable on Windows 
> with NTFS ACLs due to a known JVM bug.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9357) Fallback to default authority if not specified in FileContext

2013-03-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13592864#comment-13592864
 ] 

Hadoop QA commented on HADOOP-9357:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12571995/hadoop-9357-2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 tests included appear to have a timeout.{color}

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2259//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2259//console

This message is automatically generated.

> Fallback to default authority if not specified in FileContext
> -
>
> Key: HADOOP-9357
> URL: https://issues.apache.org/jira/browse/HADOOP-9357
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: hadoop-9357-1.patch, hadoop-9357-2.patch
>
>
> Currently, FileContext adheres rather strictly to RFC2396 when it comes to 
> parsing absolute URIs (URIs with a scheme). If a user asks for a URI like 
> "hdfs:///tmp", FileContext will error while FileSystem will add the authority 
> of the default FS (e.g. turn it into "hdfs://defaultNN:port/tmp"). 
> This is technically correct, but FileSystem's behavior is nicer for users and 
> okay based on 5.2.3 in the RFC, so let's do it in FileContext too:
> {noformat}
> For backwards
> compatibility, an implementation may work around such references
> by removing the scheme if it matches that of the base URI and the
> scheme is known to always use the <hier_part> syntax.  The parser
> can then continue with the steps below for the remainder of the
> reference components.  Validating parsers should mark such a
> misformed relative reference as an error.
> {noformat}
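The fallback the description asks for can be sketched with java.net.URI. This is a hypothetical illustration of the behavior, not the attached patch; the class and method names are invented.

```java
import java.net.URI;
import java.net.URISyntaxException;

public class AuthorityFallback {
    // If the URI names the same scheme as the default FS but omits the
    // authority (e.g. "hdfs:///tmp"), borrow the default authority so
    // the result becomes "hdfs://defaultNN:port/tmp".
    static URI addDefaultAuthority(URI uri, URI defaultFs) {
        if (uri.getScheme() != null
                && uri.getScheme().equals(defaultFs.getScheme())
                && uri.getAuthority() == null) {
            try {
                return new URI(uri.getScheme(), defaultFs.getAuthority(),
                        uri.getPath(), uri.getQuery(), uri.getFragment());
            } catch (URISyntaxException e) {
                throw new IllegalArgumentException(e);
            }
        }
        return uri; // already has an authority, or a different scheme
    }

    public static void main(String[] args) {
        URI fixed = addDefaultAuthority(URI.create("hdfs:///tmp"),
                URI.create("hdfs://defaultNN:8020/"));
        System.out.println(fixed); // prints "hdfs://defaultNN:8020/tmp"
    }
}
```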

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HADOOP-9356) remove remaining references to cygwin/cygpath from scripts

2013-03-04 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas resolved HADOOP-9356.
-

   Resolution: Fixed
Fix Version/s: trunk-win
 Hadoop Flags: Reviewed

I committed the patch to branch-trunk-win.

Thank you Chris!

> remove remaining references to cygwin/cygpath from scripts
> --
>
> Key: HADOOP-9356
> URL: https://issues.apache.org/jira/browse/HADOOP-9356
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, scripts
>Affects Versions: trunk-win
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Fix For: trunk-win
>
> Attachments: HADOOP-9356-branch-trunk-win.1.patch, 
> HADOOP-9356-branch-trunk-win.2.patch
>
>
> branch-trunk-win still contains a few references to Cygwin and the cygpath 
> command that need to be removed now that they are no longer needed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9357) Fallback to default authority if not specified in FileContext

2013-03-04 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-9357:


Attachment: hadoop-9357-2.patch

Add test timeout.

> Fallback to default authority if not specified in FileContext
> -
>
> Key: HADOOP-9357
> URL: https://issues.apache.org/jira/browse/HADOOP-9357
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: hadoop-9357-1.patch, hadoop-9357-2.patch
>
>
> Currently, FileContext adheres rather strictly to RFC2396 when it comes to 
> parsing absolute URIs (URIs with a scheme). If a user asks for a URI like 
> "hdfs:///tmp", FileContext will error while FileSystem will add the authority 
> of the default FS (e.g. turn it into "hdfs://defaultNN:port/tmp"). 
> This is technically correct, but FileSystem's behavior is nicer for users and 
> okay based on 5.2.3 in the RFC, so let's do it in FileContext too:
> {noformat}
> For backwards
> compatibility, an implementation may work around such references
> by removing the scheme if it matches that of the base URI and the
> scheme is known to always use the <hier_part> syntax.  The parser
> can then continue with the steps below for the remainder of the
> reference components.  Validating parsers should mark such a
> misformed relative reference as an error.
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8973) DiskChecker cannot reliably detect an inaccessible disk on Windows with NTFS ACLs

2013-03-04 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13592830#comment-13592830
 ] 

Chris Nauroth commented on HADOOP-8973:
---

That's interesting.  Yes, I believe this is a bug in the existing code for the 
other overload of {{DiskChecker#checkDir}}.

For example, suppose a dfs.datanode.data.dir on the local file system with 
owner "foo" and perms set to 700.  Now suppose we launch datanode as user 
"bar".  {{DiskChecker#checkDir}} will just look for 700 and not consider the 
running user, so it will think that the directory is usable.  Then, it would 
experience an I/O error later whenever the process first tries to use that 
directory.

I'll file a separate jira for this.
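The gap described here, mode bits versus effective access, can be shown with plain java.io.File probes. This is an invented demo, not the DiskChecker code, and it assumes a platform where File.canRead and friends are reliable (per this jira, not Windows with NTFS ACLs):

```java
import java.io.File;
import java.nio.file.Files;

public class AccessCheckDemo {
    // Effective-access probe: asks whether the *current* process can
    // use the directory, instead of inspecting its permission bits.
    // A mode-700 directory owned by user "foo" passes a bits-only
    // check for any caller, but fails this probe when run as "bar".
    static boolean usableByCurrentUser(File dir) {
        return dir.canRead() && dir.canWrite() && dir.canExecute();
    }

    public static void main(String[] args) throws Exception {
        File own = Files.createTempDirectory("diskcheck").toFile();
        System.out.println(usableByCurrentUser(own)); // prints "true"
        own.delete();
    }
}
```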


> DiskChecker cannot reliably detect an inaccessible disk on Windows with NTFS 
> ACLs
> -
>
> Key: HADOOP-8973
> URL: https://issues.apache.org/jira/browse/HADOOP-8973
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: trunk-win
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-8973-branch-trunk-win.patch
>
>
> DiskChecker.checkDir uses File.canRead, File.canWrite, and File.canExecute to 
> check if a directory is inaccessible.  These APIs are not reliable on Windows 
> with NTFS ACLs due to a known JVM bug.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9356) remove remaining references to cygwin/cygpath from scripts

2013-03-04 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13592827#comment-13592827
 ] 

Suresh Srinivas commented on HADOOP-9356:
-

+1 for the patch.

> remove remaining references to cygwin/cygpath from scripts
> --
>
> Key: HADOOP-9356
> URL: https://issues.apache.org/jira/browse/HADOOP-9356
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, scripts
>Affects Versions: trunk-win
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-9356-branch-trunk-win.1.patch, 
> HADOOP-9356-branch-trunk-win.2.patch
>
>
> branch-trunk-win still contains a few references to Cygwin and the cygpath 
> command that need to be removed now that they are no longer needed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8973) DiskChecker cannot reliably detect an inaccessible disk on Windows with NTFS ACLs

2013-03-04 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13592795#comment-13592795
 ] 

Bikas Saha commented on HADOOP-8973:


During Datanode startup it checks that the data dir permissions are 755. Using 
the above logic, is there a hole in that check too, since 755 permissions do 
not imply that the Datanode has permission to read and write to that 
directory? DiskChecker performs that check too, using another checkDir() 
method.

> DiskChecker cannot reliably detect an inaccessible disk on Windows with NTFS 
> ACLs
> -
>
> Key: HADOOP-8973
> URL: https://issues.apache.org/jira/browse/HADOOP-8973
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: trunk-win
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-8973-branch-trunk-win.patch
>
>
> DiskChecker.checkDir uses File.canRead, File.canWrite, and File.canExecute to 
> check if a directory is inaccessible.  These APIs are not reliable on Windows 
> with NTFS ACLs due to a known JVM bug.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9356) remove remaining references to cygwin/cygpath from scripts

2013-03-04 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-9356:
--

Attachment: HADOOP-9356-branch-trunk-win.2.patch

Thanks, Suresh.  There were a few pieces of script that I missed.  I also 
didn't realize some of the documentation mentioned Cygwin.  I'm uploading a new 
patch to fix this.

Grep will still show a few occurrences of "cygwin", but I think these should be 
left in place, at least for now:

# BUILDING.txt documents the small number of remaining Unix utilities still 
required for building.
# CHANGES.txt and releasenotes.html mention old jiras pertaining to Cygwin 
support, so we wouldn't retroactively edit the change log.
# NativeLibraries.apt.vm mentions that libhadoop.so doesn't work on Cygwin.
# TestLocalDirAllocator and TestDiskError have comments stating that some tests 
don't run on Windows for platform-specific reasons.  Since this involves actual 
code, I recommend that we file a separate jira to determine if we can re-enable 
these tests, and if not, then change the comment from "Cygwin" to "Windows".
# {{UtilTest#isCygwin}} appears to be an unused method.  Again, since this is 
actual code, I recommend that we file a separate jira to track removing the 
method.


> remove remaining references to cygwin/cygpath from scripts
> --
>
> Key: HADOOP-9356
> URL: https://issues.apache.org/jira/browse/HADOOP-9356
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, scripts
>Affects Versions: trunk-win
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-9356-branch-trunk-win.1.patch, 
> HADOOP-9356-branch-trunk-win.2.patch
>
>
> branch-trunk-win still contains a few references to Cygwin and the cygpath 
> command that need to be removed now that they are no longer needed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9357) Fallback to default authority if not specified in FileContext

2013-03-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13592791#comment-13592791
 ] 

Hadoop QA commented on HADOOP-9357:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12571972/hadoop-9357-1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

  {color:red}-1 one of tests included doesn't have a timeout.{color}

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2258//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2258//console

This message is automatically generated.

> Fallback to default authority if not specified in FileContext
> -
>
> Key: HADOOP-9357
> URL: https://issues.apache.org/jira/browse/HADOOP-9357
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: hadoop-9357-1.patch
>
>
> Currently, FileContext adheres rather strictly to RFC2396 when it comes to 
> parsing absolute URIs (URIs with a scheme). If a user asks for a URI like 
> "hdfs:///tmp", FileContext will error while FileSystem will add the authority 
> of the default FS (e.g. turn it into "hdfs://defaultNN:port/tmp"). 
> This is technically correct, but FileSystem's behavior is nicer for users and 
> okay based on 5.2.3 in the RFC, so let's do it in FileContext too:
> {noformat}
> For backwards
> compatibility, an implementation may work around such references
> by removing the scheme if it matches that of the base URI and the
> scheme is known to always use the <hier_part> syntax.  The parser
> can then continue with the steps below for the remainder of the
> reference components.  Validating parsers should mark such a
> misformed relative reference as an error.
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9358) "Auth failed" log should include exception string

2013-03-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13592786#comment-13592786
 ] 

Hadoop QA commented on HADOOP-9358:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12571973/hadoop-9385.txt
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2257//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2257//console

This message is automatically generated.

> "Auth failed" log should include exception string
> -
>
> Key: HADOOP-9358
> URL: https://issues.apache.org/jira/browse/HADOOP-9358
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc, security
>Affects Versions: 3.0.0, 2.0.4-beta
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Attachments: hadoop-9385.txt
>
>
> Currently, when authentication fails, we see a WARN message like:
> {code}
> 2013-02-28 22:49:03,152 WARN  ipc.Server 
> (Server.java:saslReadAndProcess(1056)) - Auth failed for 1.2.3.4:12345:null
> {code}
> This is not useful to understand the underlying cause. The WARN entry should 
> additionally include the exception text, eg:
> {code}
> 2013-02-28 22:49:03,152 WARN  ipc.Server 
> (Server.java:saslReadAndProcess(1056)) - Auth failed for 1.2.3.4:12345:null 
> (GSS initiate failed [Caused by GSSException: Failure unspecified at GSS-API 
> level (Mechanism level: Request is a replay (34))])
> {code}
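The change the description asks for amounts to appending the exception's text to the existing message. A minimal invented sketch, not the attached patch; class and method names are hypothetical:

```java
public class AuthLogDemo {
    // Build the improved WARN message: keep the peer address, and
    // append the exception text so the underlying cause is visible.
    static String formatAuthFailure(String peer, Exception cause) {
        return "Auth failed for " + peer + " (" + cause.getMessage() + ")";
    }

    public static void main(String[] args) {
        Exception e = new Exception(
            "GSS initiate failed [Caused by GSSException: Request is a replay (34)]");
        System.out.println(formatAuthFailure("1.2.3.4:12345", e));
    }
}
```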

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9163) The rpc msg in ProtobufRpcEngine.proto should be moved out to avoid an extra copy

2013-03-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13592778#comment-13592778
 ] 

Hudson commented on HADOOP-9163:


Integrated in Hadoop-trunk-Commit #3414 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3414/])
HADOOP-9163 The rpc msg in ProtobufRpcEngine.proto should be moved out to 
avoid an extra copy (Sanjay Radia) (Revision 1452581)

 Result = SUCCESS
sradia : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1452581
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/ProtobufRpcEngine.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/proto/ProtobufRpcEngine.proto


> The rpc msg in  ProtobufRpcEngine.proto should be moved out to avoid an extra 
> copy
> --
>
> Key: HADOOP-9163
> URL: https://issues.apache.org/jira/browse/HADOOP-9163
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sanjay Radia
>Assignee: Sanjay Radia
> Attachments: Hadoop-9163-2.patch, Hadoop-9163-3.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9163) The rpc msg in ProtobufRpcEngine.proto should be moved out to avoid an extra copy

2013-03-04 Thread Sanjay Radia (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sanjay Radia updated HADOOP-9163:
-

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> The rpc msg in  ProtobufRpcEngine.proto should be moved out to avoid an extra 
> copy
> --
>
> Key: HADOOP-9163
> URL: https://issues.apache.org/jira/browse/HADOOP-9163
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sanjay Radia
>Assignee: Sanjay Radia
> Attachments: Hadoop-9163-2.patch, Hadoop-9163-3.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9356) remove remaining references to cygwin/cygpath from scripts

2013-03-04 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13592771#comment-13592771
 ] 

Suresh Srinivas commented on HADOOP-9356:
-

Chris, I see some more references to cygwin in the code base. Should they be 
removed as well?

{noformat}
./BUILDING.txt:* Unix command-line tools from GnuWin32 or Cygwin: sh, mkdir, 
rm, cp, tar, gzip
./hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html: 
compilation of protobuf files fails in windows/cygwin
./hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/site.xml:
http://www.cygwin.com/"; />
./hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/DF.java:
 * space utilization. Tested on Linux, FreeBSD, Cygwin. */
./hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileUtil.java:
   * Supports Unix, Cygwin, WindXP.
./hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/HardLink.java:
   * a Cygwin shell command, and depends on ${cygwin}/bin
./hadoop-common-project/hadoop-common/src/main/java/overview.html:http://www.cygwin.com/";>Cygwin - Required for shell support in 
./hadoop-common-project/hadoop-common/src/main/java/overview.html:installed 
cygwin, start the cygwin installer and select the packages:
./hadoop-common-project/hadoop-common/src/site/apt/NativeLibraries.apt.vm:   
library does not to work with Cygwin or the Mac OS X platform.
./hadoop-common-project/hadoop-common/src/site/apt/SingleNodeSetup.apt.vm:
[[1]] Cygwin - Required for shell support in addition to the required
./hadoop-common-project/hadoop-common/src/site/apt/SingleNodeSetup.apt.vm:   
installed cygwin, start the cygwin installer and select the packages:
./hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestLocalDirAllocator.java:
 * This test does not run on Cygwin because under Cygwin
./hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/site.xml:
http://www.cygwin.com/"; />
./hadoop-hdfs-project/hadoop-hdfs/src/main/java/overview.html:http://www.cygwin.com/";>Cygwin - Required for shell support in 
./hadoop-hdfs-project/hadoop-hdfs/src/main/java/overview.html:installed cygwin, 
start the cygwin installer and select the packages:
./hadoop-hdfs-project/hadoop-hdfs/src/main/native/tests/test-libhdfs.sh:
cygwin* | mingw* | pw23* )
./hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDiskError.java:
   * not the case on Windows (at least under Cygwin), and possibly AIX.
./hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapred/TestCounters.java:
  public void testLegacyGetGroupNames() {
./hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/testshell/ExternalMapReduce.java:
  // cygwin since it is a symlink
./hadoop-tools/hadoop-streaming/src/test/java/org/apache/hadoop/streaming/UtilTest.java:
  public static boolean isCygwin() {
./hadoop-yarn-project/hadoop-yarn/bin/yarn:cygwin=false
./hadoop-yarn-project/hadoop-yarn/bin/yarn:CYGWIN*) cygwin=true;;
{noformat}

> remove remaining references to cygwin/cygpath from scripts
> --
>
> Key: HADOOP-9356
> URL: https://issues.apache.org/jira/browse/HADOOP-9356
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, scripts
>Affects Versions: trunk-win
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-9356-branch-trunk-win.1.patch
>
>
> branch-trunk-win still contains a few references to Cygwin and the cygpath 
> command that need to be removed now that they are no longer needed.



[jira] [Commented] (HADOOP-9358) "Auth failed" log should include exception string

2013-03-04 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13592761#comment-13592761
 ] 

Aaron T. Myers commented on HADOOP-9358:


+1 pending Jenkins.

> "Auth failed" log should include exception string
> -
>
> Key: HADOOP-9358
> URL: https://issues.apache.org/jira/browse/HADOOP-9358
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc, security
>Affects Versions: 3.0.0, 2.0.4-beta
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Attachments: hadoop-9385.txt
>
>
> Currently, when authentication fails, we see a WARN message like:
> {code}
> 2013-02-28 22:49:03,152 WARN  ipc.Server 
> (Server.java:saslReadAndProcess(1056)) - Auth failed for 1.2.3.4:12345:null
> {code}
> This is not useful for understanding the underlying cause. The WARN entry should 
> additionally include the exception text, e.g.:
> {code}
> 2013-02-28 22:49:03,152 WARN  ipc.Server 
> (Server.java:saslReadAndProcess(1056)) - Auth failed for 1.2.3.4:12345:null 
> (GSS initiate failed [Caused by GSSException: Failure unspecified at GSS-API 
> level (Mechanism level: Request is a replay (34))])
> {code}



[jira] [Updated] (HADOOP-9358) "Auth failed" log should include exception string

2013-03-04 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HADOOP-9358:


Target Version/s: 3.0.0, 2.0.4-beta
  Status: Patch Available  (was: Open)

> "Auth failed" log should include exception string
> -
>
> Key: HADOOP-9358
> URL: https://issues.apache.org/jira/browse/HADOOP-9358
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc, security
>Affects Versions: 3.0.0, 2.0.4-beta
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Attachments: hadoop-9385.txt
>
>
> Currently, when authentication fails, we see a WARN message like:
> {code}
> 2013-02-28 22:49:03,152 WARN  ipc.Server 
> (Server.java:saslReadAndProcess(1056)) - Auth failed for 1.2.3.4:12345:null
> {code}
> This is not useful for understanding the underlying cause. The WARN entry should 
> additionally include the exception text, e.g.:
> {code}
> 2013-02-28 22:49:03,152 WARN  ipc.Server 
> (Server.java:saslReadAndProcess(1056)) - Auth failed for 1.2.3.4:12345:null 
> (GSS initiate failed [Caused by GSSException: Failure unspecified at GSS-API 
> level (Mechanism level: Request is a replay (34))])
> {code}



[jira] [Updated] (HADOOP-9357) Fallback to default authority if not specified in FileContext

2013-03-04 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-9357:


Attachment: hadoop-9357-1.patch

Here's a patch that implements the recommendation in the RFC.

If you want to actually see the fail-before-pass-after behavior of the new 
test, run {{mvn test 
-Dtest=TestHDFSFileContextMainOperations#testPathSchemeNoAuthority}}. The new 
code only kicks in when the defaultFS has a non-null authority (thus you need 
{{Hdfs}} and not {{RawLocalFs}}).
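The fallback behavior the patch adds can be sketched roughly as follows, using plain {{java.net.URI}}. This is a hypothetical illustration of the idea, not the actual patch code: if a URI carries a scheme but no authority, and the default FS shares that scheme and has a non-null authority, substitute the default authority.

```java
import java.net.URI;
import java.net.URISyntaxException;

public class AuthorityFallback {
    // If 'path' has a scheme but no authority, and the default FS URI has the
    // same scheme plus a non-null authority, borrow the default authority
    // (the RFC 2396 section 5.2.3 workaround discussed above).
    static URI resolveAuthority(URI path, URI defaultFs) {
        if (path.getScheme() != null && path.getAuthority() == null
                && path.getScheme().equals(defaultFs.getScheme())
                && defaultFs.getAuthority() != null) {
            try {
                return new URI(path.getScheme(), defaultFs.getAuthority(),
                               path.getPath(), path.getQuery(), path.getFragment());
            } catch (URISyntaxException e) {
                throw new IllegalArgumentException(e);
            }
        }
        return path;  // already absolute with authority, or different scheme
    }

    public static void main(String[] args) {
        URI defaultFs = URI.create("hdfs://defaultNN:8020/");
        System.out.println(resolveAuthority(URI.create("hdfs:///tmp"), defaultFs));
        // prints hdfs://defaultNN:8020/tmp
    }
}
```

With a defaultFS of {{hdfs://defaultNN:8020/}}, the sketch turns "hdfs:///tmp" into "hdfs://defaultNN:8020/tmp" while leaving "file:///tmp" untouched, which matches why the new test needs {{Hdfs}} rather than {{RawLocalFs}}.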

> Fallback to default authority if not specified in FileContext
> -
>
> Key: HADOOP-9357
> URL: https://issues.apache.org/jira/browse/HADOOP-9357
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Minor
> Attachments: hadoop-9357-1.patch
>
>
> Currently, FileContext adheres rather strictly to RFC2396 when it comes to 
> parsing absolute URIs (URIs with a scheme). If a user asks for a URI like 
> "hdfs:///tmp", FileContext will error while FileSystem will add the authority 
> of the default FS (e.g. turn it into "hdfs://defaultNN:port/tmp"). 
> This is technically correct, but FileSystem's behavior is nicer for users and 
> okay per section 5.2.3 of the RFC, so let's do it in FileContext too:
> {noformat}
> For backwards
> compatibility, an implementation may work around such references
> by removing the scheme if it matches that of the base URI and the
> scheme is known to always use the <hier_part> syntax.  The parser
> can then continue with the steps below for the remainder of the
> reference components.  Validating parsers should mark such a
> misformed relative reference as an error.
> {noformat}



[jira] [Updated] (HADOOP-9357) Fallback to default authority if not specified in FileContext

2013-03-04 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-9357:


Fix Version/s: 3.0.0
   Status: Patch Available  (was: Open)

> Fallback to default authority if not specified in FileContext
> -
>
> Key: HADOOP-9357
> URL: https://issues.apache.org/jira/browse/HADOOP-9357
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: hadoop-9357-1.patch
>
>
> Currently, FileContext adheres rather strictly to RFC2396 when it comes to 
> parsing absolute URIs (URIs with a scheme). If a user asks for a URI like 
> "hdfs:///tmp", FileContext will error while FileSystem will add the authority 
> of the default FS (e.g. turn it into "hdfs://defaultNN:port/tmp"). 
> This is technically correct, but FileSystem's behavior is nicer for users and 
> okay per section 5.2.3 of the RFC, so let's do it in FileContext too:
> {noformat}
> For backwards
> compatibility, an implementation may work around such references
> by removing the scheme if it matches that of the base URI and the
> scheme is known to always use the <hier_part> syntax.  The parser
> can then continue with the steps below for the remainder of the
> reference components.  Validating parsers should mark such a
> misformed relative reference as an error.
> {noformat}



[jira] [Updated] (HADOOP-9358) "Auth failed" log should include exception string

2013-03-04 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HADOOP-9358:


Attachment: hadoop-9385.txt

> "Auth failed" log should include exception string
> -
>
> Key: HADOOP-9358
> URL: https://issues.apache.org/jira/browse/HADOOP-9358
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc, security
>Affects Versions: 3.0.0, 2.0.4-beta
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Attachments: hadoop-9385.txt
>
>
> Currently, when authentication fails, we see a WARN message like:
> {code}
> 2013-02-28 22:49:03,152 WARN  ipc.Server 
> (Server.java:saslReadAndProcess(1056)) - Auth failed for 1.2.3.4:12345:null
> {code}
> This is not useful for understanding the underlying cause. The WARN entry should 
> additionally include the exception text, e.g.:
> {code}
> 2013-02-28 22:49:03,152 WARN  ipc.Server 
> (Server.java:saslReadAndProcess(1056)) - Auth failed for 1.2.3.4:12345:null 
> (GSS initiate failed [Caused by GSSException: Failure unspecified at GSS-API 
> level (Mechanism level: Request is a replay (34))])
> {code}



[jira] [Created] (HADOOP-9358) "Auth failed" log should include exception string

2013-03-04 Thread Todd Lipcon (JIRA)
Todd Lipcon created HADOOP-9358:
---

 Summary: "Auth failed" log should include exception string
 Key: HADOOP-9358
 URL: https://issues.apache.org/jira/browse/HADOOP-9358
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc, security
Affects Versions: 3.0.0, 2.0.4-beta
Reporter: Todd Lipcon
Assignee: Todd Lipcon


Currently, when authentication fails, we see a WARN message like:
{code}
2013-02-28 22:49:03,152 WARN  ipc.Server (Server.java:saslReadAndProcess(1056)) 
- Auth failed for 1.2.3.4:12345:null
{code}
This is not useful for understanding the underlying cause. The WARN entry should 
additionally include the exception text, e.g.:
{code}
2013-02-28 22:49:03,152 WARN  ipc.Server (Server.java:saslReadAndProcess(1056)) 
- Auth failed for 1.2.3.4:12345:null (GSS initiate failed [Caused by 
GSSException: Failure unspecified at GSS-API level (Mechanism level: Request is 
a replay (34))])
{code}
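As a rough illustration of the proposed change (a hypothetical helper, not the actual Server.java code), the message construction might look like the following, with the exception's own description appended in parentheses:

```java
public class AuthFailureLog {
    // Builds the WARN text; 'protocol' and 'cause' may be null, matching the
    // "1.2.3.4:12345:null" shape seen in the original log line.
    static String buildAuthFailureMessage(String hostAddress, int port,
                                          String protocol, Throwable cause) {
        StringBuilder sb = new StringBuilder();
        sb.append("Auth failed for ").append(hostAddress)
          .append(':').append(port).append(':').append(protocol);
        if (cause != null) {
            // The proposed addition: include the exception's description
            // so the operator can see the underlying SASL/GSS failure.
            sb.append(" (").append(cause.getLocalizedMessage()).append(')');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        Throwable gss = new Exception(
            "GSS initiate failed [Caused by GSSException: Request is a replay (34)]");
        System.out.println(buildAuthFailureMessage("1.2.3.4", 12345, null, gss));
        // prints:
        // Auth failed for 1.2.3.4:12345:null (GSS initiate failed [Caused by GSSException: Request is a replay (34)])
    }
}
```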



[jira] [Commented] (HADOOP-9232) JniBasedUnixGroupsMappingWithFallback fails on Windows with UnsatisfiedLinkError

2013-03-04 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13592738#comment-13592738
 ] 

Chris Nauroth commented on HADOOP-9232:
---

Quick recap: we have +1s from two contributors, and there are no blocking issues 
remaining on this patch.  [~sureshms], we are ready for review from a 
committer.  Thank you.


> JniBasedUnixGroupsMappingWithFallback fails on Windows with 
> UnsatisfiedLinkError
> 
>
> Key: HADOOP-9232
> URL: https://issues.apache.org/jira/browse/HADOOP-9232
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native, security
>Affects Versions: trunk-win
>Reporter: Chris Nauroth
>Assignee: Ivan Mitic
> Attachments: HADOOP-9232.branch-trunk-win.jnigroups.2.patch, 
> HADOOP-9232.branch-trunk-win.jnigroups.3.patch, 
> HADOOP-9232.branch-trunk-win.jnigroups.patch
>
>
> {{JniBasedUnixGroupsMapping}} calls native code which isn't implemented 
> properly for Windows, causing {{UnsatisfiedLinkError}}.  The fallback logic 
> in {{JniBasedUnixGroupsMappingWithFallback}} works by checking if the native 
> code is loaded during startup.  In this case, hadoop.dll is present and 
> loaded, but it doesn't contain the right code.  There will be no attempt to 
> fall back to {{ShellBasedUnixGroupsMapping}}.
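The failure mode described above, where the library loads but a specific symbol is missing, suggests probing the actual native call instead of only checking load status. A rough, hypothetical sketch of that pattern (stand-in types, not the real Hadoop classes):

```java
public class GroupsMappingFallback {
    // Stand-in for the groups-mapping provider interface.
    interface GroupsMapping { String name(); }

    // Instead of only checking that the native library loaded, run a probe of
    // the specific native call once and drop to the shell-based implementation
    // if the symbol is missing (UnsatisfiedLinkError at call time).
    static GroupsMapping chooseImpl(boolean nativeLoaded, Runnable nativeProbe) {
        if (nativeLoaded) {
            try {
                nativeProbe.run();      // would throw if hadoop.dll lacks the function
                return () -> "jni";     // stand-in for JniBasedUnixGroupsMapping
            } catch (UnsatisfiedLinkError e) {
                // library present but this function isn't: fall through
            }
        }
        return () -> "shell";           // stand-in for ShellBasedUnixGroupsMapping
    }

    public static void main(String[] args) {
        // Simulate a loaded hadoop.dll that is missing the groups function.
        GroupsMapping m = chooseImpl(true,
            () -> { throw new UnsatisfiedLinkError("stub"); });
        System.out.println(m.name());  // prints "shell"
    }
}
```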



[jira] [Commented] (HADOOP-8973) DiskChecker cannot reliably detect an inaccessible disk on Windows with NTFS ACLs

2013-03-04 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13592734#comment-13592734
 ] 

Chris Nauroth commented on HADOOP-8973:
---

Here was the last discussion on this issue:

{quote}
Can we try the following approach that has the advantage of being common across 
platforms.
Get FileStatus of the file/dir using FileSystem API
getPermission() from FileStatus
There are a bunch of impliesRead(), impliesWrite() functions that tell whether 
a given FsPermission object allows read/write etc. I am sorry I don't remember 
where these functions are.
Using these functions one can get the equivalent of isReadable/isWritable
{quote}

Digging into this, unfortunately no, we cannot reconstruct the logic of 
{{DiskChecker}} in a cross-platform way by using {{FsPermission}} and 
{{FsAction}} (with delegation to winutils ls when running on Windows).  The 
reason that methods like {{File#canRead}} are used here is that they check 
whether the current process can read the file, with full consideration of the 
specific user that launched the process.  Our {{FsPermission}} model considers 
permissions, but it does not consider how those permissions map to specific 
users (nor would it be trivial to expand it to include this information).

For example, using {{FsPermission}} + winutils ls, we could see that a 
particular directory has permissions 770, so the owner and members of the 
primary group have full access to this directory.  If the user that launched 
the process is either the owner or a member of the primary group, then 
{{DiskChecker}} must report that the disk is usable.  If not, then 
{{DiskChecker}} must report that the disk is unusable.  However, 
{{DiskChecker}} doesn't have enough information to choose the correct result.

It would be error-prone to go down the path of trying to duplicate the exact 
permission checking logic enforced by the OS on the local file system.  You 
might argue that we could solve the example above by trying to combine more 
calls to get the primary group and check if the current user's groups include 
that group.  However, that wouldn't be sufficient to cover more complex rules 
enforced by local file systems, such as POSIX ACLs.  It would be challenging to 
get this exactly right, and since the logic is different on different OSes, it 
wouldn't satisfy the goal of trying to keep this layer of code 
platform-agnostic anyway.

I propose that we accept the patch I wrote earlier, still attached to the jira, 
so that {{DiskChecker}} actually attempts a few file system operations for 
verification when running on Windows.  These work-arounds could be removed 
whenever Hadoop upgrades to JDK7.  Then, we can rewrite {{DiskChecker}} to use 
the new Java {{FileSystem}} API, which doesn't have these bugs.  I've confirmed 
that the existing patch still applies cleanly to branch-trunk-win and fixes 
{{TestDiskChecker}} so that it passes on both Mac and Windows.
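The "attempt a few file system operations" approach can be sketched like this. This is a hypothetical illustration of the idea under discussion, not the attached patch: let the OS itself answer the access question by trying a write, a listing, and a delete, rather than trusting {{File#canRead}}/{{File#canWrite}}.

```java
import java.io.File;
import java.io.IOException;

public class DirProbe {
    // Verify a directory by attempting real operations; the OS enforces the
    // actual ACLs, so this works even where File.canRead/canWrite lie
    // (e.g. Windows with NTFS ACLs on pre-JDK7 JVMs).
    static boolean dirIsUsable(File dir) {
        if (!dir.isDirectory()) {
            return false;
        }
        try {
            File probe = File.createTempFile("diskcheck", null, dir); // write check
            boolean listable = dir.list() != null;                    // read check
            boolean deletable = probe.delete();                       // modify check
            return listable && deletable;
        } catch (IOException | SecurityException e) {
            return false;  // any denied operation marks the disk unusable
        }
    }

    public static void main(String[] args) {
        File tmp = new File(System.getProperty("java.io.tmpdir"));
        System.out.println(dirIsUsable(tmp));
    }
}
```

Note the probe file creation is also exactly the kind of work-around that could be dropped once a JDK7 {{java.nio.file}}-based rewrite lands, as proposed above.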

> DiskChecker cannot reliably detect an inaccessible disk on Windows with NTFS 
> ACLs
> -
>
> Key: HADOOP-8973
> URL: https://issues.apache.org/jira/browse/HADOOP-8973
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: trunk-win
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-8973-branch-trunk-win.patch
>
>
> DiskChecker.checkDir uses File.canRead, File.canWrite, and File.canExecute to 
> check if a directory is inaccessible.  These APIs are not reliable on Windows 
> with NTFS ACLs due to a known JVM bug.



[jira] [Updated] (HADOOP-9356) remove remaining references to cygwin/cygpath from scripts

2013-03-04 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-9356:
--

Attachment: HADOOP-9356-branch-trunk-win.1.patch

This patch removes the remaining references to cygwin/cygpath from the shell 
scripts and pom.xml files.  After applying this change, I retested fully 
building and running a distro on both Windows and Ubuntu with native.

> remove remaining references to cygwin/cygpath from scripts
> --
>
> Key: HADOOP-9356
> URL: https://issues.apache.org/jira/browse/HADOOP-9356
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, scripts
>Affects Versions: trunk-win
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-9356-branch-trunk-win.1.patch
>
>
> branch-trunk-win still contains a few references to Cygwin and the cygpath 
> command that need to be removed now that they are no longer needed.



[jira] [Moved] (HADOOP-9357) Fallback to default authority if not specified in FileContext

2013-03-04 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang moved HDFS-4547 to HADOOP-9357:
---

Key: HADOOP-9357  (was: HDFS-4547)
Project: Hadoop Common  (was: Hadoop HDFS)

> Fallback to default authority if not specified in FileContext
> -
>
> Key: HADOOP-9357
> URL: https://issues.apache.org/jira/browse/HADOOP-9357
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Minor
>
> Currently, FileContext adheres rather strictly to RFC2396 when it comes to 
> parsing absolute URIs (URIs with a scheme). If a user asks for a URI like 
> "hdfs:///tmp", FileContext will error while FileSystem will add the authority 
> of the default FS (e.g. turn it into "hdfs://defaultNN:port/tmp"). 
> This is technically correct, but FileSystem's behavior is nicer for users and 
> okay per section 5.2.3 of the RFC, so let's do it in FileContext too:
> {noformat}
> For backwards
> compatibility, an implementation may work around such references
> by removing the scheme if it matches that of the base URI and the
> scheme is known to always use the <hier_part> syntax.  The parser
> can then continue with the steps below for the remainder of the
> reference components.  Validating parsers should mark such a
> misformed relative reference as an error.
> {noformat}



[jira] [Commented] (HADOOP-8562) Enhancements to Hadoop for Windows Server and Windows Azure development and runtime environments

2013-03-04 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13592600#comment-13592600
 ] 

Konstantin Shvachko commented on HADOOP-8562:
-

Makes sense guys, thanks.

> Enhancements to Hadoop for Windows Server and Windows Azure development and 
> runtime environments
> 
>
> Key: HADOOP-8562
> URL: https://issues.apache.org/jira/browse/HADOOP-8562
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 3.0.0
>Reporter: Bikas Saha
>Assignee: Bikas Saha
> Attachments: branch-trunk-win.min-notest.patch, 
> branch-trunk-win-min.patch, branch-trunk-win.min.patch, 
> branch-trunk-win.patch, branch-trunk-win.patch, branch-trunk-win.patch, 
> branch-trunk-win.patch, branch-trunk-win.patch, branch-trunk-win.patch, 
> branch-trunk-win.patch, branch-trunk-win.patch, test-untar.tar, test-untar.tgz
>
>
> This JIRA tracks the work that needs to be done on trunk to enable Hadoop to 
> run on Windows Server and Azure environments. This incorporates porting 
> relevant work from the similar effort on branch 1 tracked via HADOOP-8079.



[jira] [Updated] (HADOOP-9355) Abstract symlink tests to use either FileContext or FileSystem

2013-03-04 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-9355:


Attachment: hadoop-9355-wip.patch

WIP patch. Added a new interface that is implemented by either FileContext or 
FileSystem. Copy-pasted refactored versions of the symlink tests to use this 
interface instead. The FileSystem tests just fail right now because symlinks 
aren't implemented yet, but this gives you a flavor of how it'll look.
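The shape of such an abstraction might look like the following sketch. The names and the in-memory stand-in are hypothetical, invented for illustration; the real adapters would wrap FileContext and FileSystem:

```java
import java.util.HashMap;
import java.util.Map;

public class FsTestWrapperSketch {
    // Minimal wrapper interface the shared symlink tests program against.
    interface FSTestWrapper {
        void createSymlink(String target, String link);
        String getLinkTarget(String link);
    }

    // In-memory stand-in; real code would have one adapter per API
    // (FileContext-backed and FileSystem-backed).
    static class InMemoryWrapper implements FSTestWrapper {
        private final Map<String, String> links = new HashMap<>();
        public void createSymlink(String target, String link) { links.put(link, target); }
        public String getLinkTarget(String link) { return links.get(link); }
    }

    // A test written once against the interface runs against any adapter.
    static void runSymlinkRoundTrip(FSTestWrapper wrapper) {
        wrapper.createSymlink("/real/file", "/some/link");
        if (!"/real/file".equals(wrapper.getLinkTarget("/some/link"))) {
            throw new AssertionError("symlink round trip failed");
        }
    }

    public static void main(String[] args) {
        runSymlinkRoundTrip(new InMemoryWrapper());
        System.out.println("ok");  // prints "ok"
    }
}
```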

> Abstract symlink tests to use either FileContext or FileSystem
> --
>
> Key: HADOOP-9355
> URL: https://issues.apache.org/jira/browse/HADOOP-9355
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Andrew Wang
> Attachments: hadoop-9355-wip.patch
>
>
> We'd like to run the symlink tests using both FileContext and the upcoming 
> FileSystem implementation. The first step here is abstracting the test logic 
> to run on an abstract filesystem implementation.



[jira] [Assigned] (HADOOP-9355) Abstract symlink tests to use either FileContext or FileSystem

2013-03-04 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang reassigned HADOOP-9355:
---

Assignee: Andrew Wang

> Abstract symlink tests to use either FileContext or FileSystem
> --
>
> Key: HADOOP-9355
> URL: https://issues.apache.org/jira/browse/HADOOP-9355
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: hadoop-9355-wip.patch
>
>
> We'd like to run the symlink tests using both FileContext and the upcoming 
> FileSystem implementation. The first step here is abstracting the test logic 
> to run on an abstract filesystem implementation.



[jira] [Created] (HADOOP-9356) remove remaining references to cygwin/cygpath from scripts

2013-03-04 Thread Chris Nauroth (JIRA)
Chris Nauroth created HADOOP-9356:
-

 Summary: remove remaining references to cygwin/cygpath from scripts
 Key: HADOOP-9356
 URL: https://issues.apache.org/jira/browse/HADOOP-9356
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build, scripts
Affects Versions: trunk-win
Reporter: Chris Nauroth
Assignee: Chris Nauroth


branch-trunk-win still contains a few references to Cygwin and the cygpath 
command that need to be removed now that they are no longer needed.



[jira] [Created] (HADOOP-9355) Abstract symlink tests to use either FileContext or FileSystem

2013-03-04 Thread Andrew Wang (JIRA)
Andrew Wang created HADOOP-9355:
---

 Summary: Abstract symlink tests to use either FileContext or 
FileSystem
 Key: HADOOP-9355
 URL: https://issues.apache.org/jira/browse/HADOOP-9355
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Andrew Wang


We'd like to run the symlink tests using both FileContext and the upcoming 
FileSystem implementation. The first step here is abstracting the test logic to 
run on an abstract filesystem implementation.



[jira] [Resolved] (HADOOP-9354) Windows native project files missing license headers

2013-03-04 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas resolved HADOOP-9354.
-

  Resolution: Fixed
Hadoop Flags: Reviewed

+1. I committed the patch to branch-trunk-win. Thank you Chris!

> Windows native project files missing license headers
> 
>
> Key: HADOOP-9354
> URL: https://issues.apache.org/jira/browse/HADOOP-9354
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: native
>Affects Versions: trunk-win
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Trivial
> Attachments: HADOOP-9354-branch-trunk-win.1.patch, 
> HADOOP-9354-branch-trunk-win.2.patch
>
>
> We need to add the license header to native.sln, native.vcxproj, and 
> native.vcxproj.filters.  The equivalent files in winutils already have the 
> license headers.



[jira] [Updated] (HADOOP-9354) Windows native project files missing license headers

2013-03-04 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-9354:
--

Description: We need to add the license header to native.sln, 
native.vcxproj, and native.vcxproj.filters.  The equivalent files in winutils 
already have the license headers.  (was: We need to add the license header to 
native.sln and native.vcxproj.  winutils.sln and winutils.vcxproj already have 
it.)
Summary: Windows native project files missing license headers  (was: 
native.sln and native.vcxproj missing license header)

> Windows native project files missing license headers
> 
>
> Key: HADOOP-9354
> URL: https://issues.apache.org/jira/browse/HADOOP-9354
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: native
>Affects Versions: trunk-win
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Trivial
> Attachments: HADOOP-9354-branch-trunk-win.1.patch, 
> HADOOP-9354-branch-trunk-win.2.patch
>
>
> We need to add the license header to native.sln, native.vcxproj, and 
> native.vcxproj.filters.  The equivalent files in winutils already have the 
> license headers.



[jira] [Updated] (HADOOP-9354) Windows native project files missing license headers

2013-03-04 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-9354:
--

Attachment: HADOOP-9354-branch-trunk-win.2.patch

> Windows native project files missing license headers
> 
>
> Key: HADOOP-9354
> URL: https://issues.apache.org/jira/browse/HADOOP-9354
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: native
>Affects Versions: trunk-win
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Trivial
> Attachments: HADOOP-9354-branch-trunk-win.1.patch, 
> HADOOP-9354-branch-trunk-win.2.patch
>
>
> We need to add the license header to native.sln, native.vcxproj, and 
> native.vcxproj.filters.  The equivalent files in winutils already have the 
> license headers.



[jira] [Updated] (HADOOP-9354) native.sln and native.vcxproj missing license header

2013-03-04 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-9354:
--

Summary: native.sln and native.vcxproj missing license header  (was: 
native.sln missing license header)

> native.sln and native.vcxproj missing license header
> 
>
> Key: HADOOP-9354
> URL: https://issues.apache.org/jira/browse/HADOOP-9354
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: native
>Affects Versions: trunk-win
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Trivial
> Attachments: HADOOP-9354-branch-trunk-win.1.patch
>
>
> We need to add the license header to native.sln.  winutils.sln already has it.



[jira] [Commented] (HADOOP-9354) native.sln and native.vcxproj missing license header

2013-03-04 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13592526#comment-13592526
 ] 

Suresh Srinivas commented on HADOOP-9354:
-

native.vcxproj and native.vcxproj.filters also need license header

> native.sln and native.vcxproj missing license header
> 
>
> Key: HADOOP-9354
> URL: https://issues.apache.org/jira/browse/HADOOP-9354
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: native
>Affects Versions: trunk-win
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Trivial
> Attachments: HADOOP-9354-branch-trunk-win.1.patch
>
>
> We need to add the license header to native.sln and native.vcxproj.  
> winutils.sln and winutils.vcxproj already have it.



[jira] [Updated] (HADOOP-9354) native.sln and native.vcxproj missing license header

2013-03-04 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-9354:
--

Description: We need to add the license header to native.sln and 
native.vcxproj.  winutils.sln and winutils.vcxproj already have it.  (was: We 
need to add the license header to native.sln.  winutils.sln already has it.)

> native.sln and native.vcxproj missing license header
> 
>
> Key: HADOOP-9354
> URL: https://issues.apache.org/jira/browse/HADOOP-9354
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: native
>Affects Versions: trunk-win
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Trivial
> Attachments: HADOOP-9354-branch-trunk-win.1.patch
>
>
> We need to add the license header to native.sln and native.vcxproj.  
> winutils.sln and winutils.vcxproj already have it.



[jira] [Updated] (HADOOP-9354) native.sln missing license header

2013-03-04 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-9354:
--

Attachment: HADOOP-9354-branch-trunk-win.1.patch

> native.sln missing license header
> -
>
> Key: HADOOP-9354
> URL: https://issues.apache.org/jira/browse/HADOOP-9354
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: native
>Affects Versions: trunk-win
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Trivial
> Attachments: HADOOP-9354-branch-trunk-win.1.patch
>
>
> We need to add the license header to native.sln.  winutils.sln already has it.



[jira] [Created] (HADOOP-9354) native.sln missing license header

2013-03-04 Thread Chris Nauroth (JIRA)
Chris Nauroth created HADOOP-9354:
-

 Summary: native.sln missing license header
 Key: HADOOP-9354
 URL: https://issues.apache.org/jira/browse/HADOOP-9354
 Project: Hadoop Common
  Issue Type: Improvement
  Components: native
Affects Versions: trunk-win
Reporter: Chris Nauroth
Assignee: Chris Nauroth
Priority: Trivial


We need to add the license header to native.sln.  winutils.sln already has it.



[jira] [Commented] (HADOOP-8562) Enhancements to Hadoop for Windows Server and Windows Azure development and runtime environments

2013-03-04 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13592445#comment-13592445
 ] 

Chris Nauroth commented on HADOOP-8562:
---

{quote}
Chris Nauroth, can .sln files support inclusion of the Apache license header?
{quote}

Yes, we have the license header in winutils.sln, but we must have forgotten to 
add it to native.sln.  I'll prepare a patch to add it.
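
For reference, .sln files tolerate {{#}} line comments, so the header can take the same form already used in winutils.sln; a sketch of the standard ASF source header in that comment style (the patch should mirror winutils.sln exactly):

```
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```

For the XML-based .vcxproj files, the same text would go in an XML comment after the declaration, as winutils.vcxproj presumably already does.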

{quote}
Do we still need cygwin after that patch? If not, shouldn't all cygpath 
occurrences be removed?
{quote}

We do not need cygwin.  I'll prepare a patch to remove the remaining 
occurrences of cygpath.


> Enhancements to Hadoop for Windows Server and Windows Azure development and 
> runtime environments
> 
>
> Key: HADOOP-8562
> URL: https://issues.apache.org/jira/browse/HADOOP-8562
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 3.0.0
>Reporter: Bikas Saha
>Assignee: Bikas Saha
> Attachments: branch-trunk-win.min-notest.patch, 
> branch-trunk-win-min.patch, branch-trunk-win.min.patch, 
> branch-trunk-win.patch, branch-trunk-win.patch, branch-trunk-win.patch, 
> branch-trunk-win.patch, branch-trunk-win.patch, branch-trunk-win.patch, 
> branch-trunk-win.patch, branch-trunk-win.patch, test-untar.tar, test-untar.tgz
>
>
> This JIRA tracks the work that needs to be done on trunk to enable Hadoop to 
> run on Windows Server and Azure environments. This incorporates porting 
> relevant work from the similar effort on branch 1 tracked via HADOOP-8079.



[jira] [Commented] (HADOOP-8562) Enhancements to Hadoop for Windows Server and Windows Azure development and runtime environments

2013-03-04 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13592442#comment-13592442
 ] 

Suresh Srinivas commented on HADOOP-8562:
-

[~shv] Thanks for the review.
I think native.sln is added to the list of files that are ignored by the Apache 
license text check. [~cnauroth], can .sln files support inclusion of the Apache 
license header?

bq. Do you still need the file CHANGES.branch-trunk-win.txt? Will it be 
incorporated into CHANGES.txt?
Yes. After the merge I will remove that file and fold its entries into 
CHANGES.txt, as was done for previous feature branches.

bq. Do we still need cygwin after that patch? If not, shouldn't all cygpath 
occurrences be removed?
[~cnauroth] can you please answer this?


> Enhancements to Hadoop for Windows Server and Windows Azure development and 
> runtime environments
> 
>
> Key: HADOOP-8562
> URL: https://issues.apache.org/jira/browse/HADOOP-8562
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 3.0.0
>Reporter: Bikas Saha
>Assignee: Bikas Saha
> Attachments: branch-trunk-win.min-notest.patch, 
> branch-trunk-win-min.patch, branch-trunk-win.min.patch, 
> branch-trunk-win.patch, branch-trunk-win.patch, branch-trunk-win.patch, 
> branch-trunk-win.patch, branch-trunk-win.patch, branch-trunk-win.patch, 
> branch-trunk-win.patch, branch-trunk-win.patch, test-untar.tar, test-untar.tgz
>
>
> This JIRA tracks the work that needs to be done on trunk to enable Hadoop to 
> run on Windows Server and Azure environments. This incorporates porting 
> relevant work from the similar effort on branch 1 tracked via HADOOP-8079.



[jira] [Updated] (HADOOP-9352) Expose UGI.setLoginUser for tests

2013-03-04 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HADOOP-9352:


    Resolution: Fixed
 Fix Version/s: 2.0.4-beta
                0.23.7
                3.0.0
        Status: Resolved  (was: Patch Available)

Thanks.  I have committed this to trunk, branch-2, and branch-0.23.

> Expose UGI.setLoginUser for tests
> -
>
> Key: HADOOP-9352
> URL: https://issues.apache.org/jira/browse/HADOOP-9352
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 3.0.0, 0.23.7, 2.0.4-beta
>
> Attachments: HADOOP-9352.branch-23.patch, HADOOP-9352.patch
>
>
> The {{UGI.setLoginUser}} method is not publicly exposed, which makes it 
> impossible to correctly test code executed outside of an explicit {{doAs}}.  
> {{getCurrentUser}}/{{getLoginUser}} will always vivify the login user from 
> the user running the test, and not an arbitrary user to be determined by the 
> test.  The method is documented with why it's not ready for prime-time, but 
> it's good enough for tests.
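
The pattern described above can be illustrated with a minimal self-contained stand-in. This is not Hadoop's {{UserGroupInformation}}; the class below only mimics the {{setLoginUser}}/{{getLoginUser}} shape to show why a test-only setter is needed when code under test runs outside an explicit {{doAs}}:

```java
// Stand-in sketch for the pattern HADOOP-9352 enables: a lazily-vivified
// static "login user" plus a test-only override. Illustrative only; the
// real API lives in org.apache.hadoop.security.UserGroupInformation.
public class LoginUserSketch {
    private static String loginUser; // lazily initialized, like UGI's login user

    // Test-only override, analogous to UGI.setLoginUser.
    static void setLoginUser(String user) {
        loginUser = user;
    }

    // Analogous to UGI.getLoginUser: without an override, this vivifies
    // the login user from the OS user running the process.
    static String getLoginUser() {
        if (loginUser == null) {
            loginUser = System.getProperty("user.name");
        }
        return loginUser;
    }

    public static void main(String[] args) {
        // A test pins an arbitrary user instead of whoever runs the JVM:
        setLoginUser("alice");
        System.out.println(getLoginUser()); // prints "alice"
    }
}
```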



[jira] [Commented] (HADOOP-9352) Expose UGI.setLoginUser for tests

2013-03-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13592263#comment-13592263
 ] 

Hudson commented on HADOOP-9352:


Integrated in Hadoop-trunk-Commit #3407 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3407/])
HADOOP-9352. Expose UGI.setLoginUser for tests (daryn) (Revision 1452338)

 Result = SUCCESS
daryn : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1452338
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUserGroupInformation.java


> Expose UGI.setLoginUser for tests
> -
>
> Key: HADOOP-9352
> URL: https://issues.apache.org/jira/browse/HADOOP-9352
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 3.0.0, 0.23.7, 2.0.4-beta
>
> Attachments: HADOOP-9352.branch-23.patch, HADOOP-9352.patch
>
>
> The {{UGI.setLoginUser}} method is not publicly exposed, which makes it 
> impossible to correctly test code executed outside of an explicit {{doAs}}.  
> {{getCurrentUser}}/{{getLoginUser}} will always vivify the login user from 
> the user running the test, and not an arbitrary user to be determined by the 
> test.  The method is documented with why it's not ready for prime-time, but 
> it's good enough for tests.



[jira] [Commented] (HADOOP-9117) replace protoc ant plugin exec with a maven plugin

2013-03-04 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13592221#comment-13592221
 ] 

Junping Du commented on HADOOP-9117:


Hi Nicholas, did you solve the issue? I ran into the same problem in Eclipse 
recently but haven't figured out how to get past it.

> replace protoc ant plugin exec with a maven plugin
> --
>
> Key: HADOOP-9117
> URL: https://issues.apache.org/jira/browse/HADOOP-9117
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.0.2-alpha
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Fix For: 2.0.4-beta
>
> Attachments: HADOOP-9117.patch, HADOOP-9117.patch, HADOOP-9117.patch, 
> HADOOP-9117.patch, HADOOP-9117.patch, HADOOP-9117.patch
>
>
> The protoc compiler is currently invoked via the ant exec plugin. There is a 
> bug in the ant plugin's exec task: it does not consume STDOUT or STDERR 
> appropriately, which sometimes makes the build stall (you need to press Enter 
> to continue).
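
The stall described above is the classic un-drained pipe problem: a child process blocks on a write to stdout/stderr once the OS pipe buffer fills, if the parent never reads it. A minimal plain-Java sketch of the consuming side (illustrative only; it stands in for what a dedicated Maven plugin can do, and runs {{echo}} rather than {{protoc}}):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

// Demonstrates draining a child process's output so it can never block
// on a full pipe buffer, which is the failure mode behind build hangs.
public class DrainChildOutput {
    public static void main(String[] args) throws Exception {
        ProcessBuilder pb = new ProcessBuilder("echo", "hello from child");
        pb.redirectErrorStream(true); // merge stderr into stdout: one reader suffices
        Process p = pb.start();

        StringBuilder out = new StringBuilder();
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) {
                // Consuming every line keeps the child from stalling on a write.
                out.append(line).append('\n');
            }
        }
        p.waitFor();
        System.out.print(out);
    }
}
```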



[jira] [Commented] (HADOOP-8545) Filesystem Implementation for OpenStack Swift

2013-03-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13592069#comment-13592069
 ] 

Hadoop QA commented on HADOOP-8545:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12571851/HADOOP-8545-10.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 13 new 
or modified test files.

  {color:red}-1 one of tests included doesn't have a timeout.{color}

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2256//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2256//console

This message is automatically generated.

> Filesystem Implementation for OpenStack Swift
> -
>
> Key: HADOOP-8545
> URL: https://issues.apache.org/jira/browse/HADOOP-8545
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.0.3-alpha, 1.1.2
>Reporter: Tim Miller
>Assignee: Dmitry Mezhensky
>  Labels: hadoop, patch
> Fix For: 1.1.2
>
> Attachments: HADOOP-8545-10.patch, HADOOP-8545-1.patch, 
> HADOOP-8545-2.patch, HADOOP-8545-3.patch, HADOOP-8545-4.patch, 
> HADOOP-8545-5.patch, HADOOP-8545-6.patch, HADOOP-8545-7.patch, 
> HADOOP-8545-8.patch, HADOOP-8545-9.patch, HADOOP-8545-javaclouds-2.patch, 
> HADOOP-8545.patch, HADOOP-8545.patch
>
>
> Add a filesystem implementation for OpenStack Swift object store, similar to 
> the one which exists today for S3.



[jira] [Updated] (HADOOP-8545) Filesystem Implementation for OpenStack Swift

2013-03-04 Thread Dmitry Mezhensky (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Mezhensky updated HADOOP-8545:
-

   Fix Version/s: 1.1.2
          Labels: hadoop patch  (was: )
Target Version/s: 1.1.1
          Status: Patch Available  (was: In Progress)

> Filesystem Implementation for OpenStack Swift
> -
>
> Key: HADOOP-8545
> URL: https://issues.apache.org/jira/browse/HADOOP-8545
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.0.3-alpha, 1.1.2
>Reporter: Tim Miller
>Assignee: Dmitry Mezhensky
>  Labels: patch, hadoop
> Fix For: 1.1.2
>
> Attachments: HADOOP-8545-10.patch, HADOOP-8545-1.patch, 
> HADOOP-8545-2.patch, HADOOP-8545-3.patch, HADOOP-8545-4.patch, 
> HADOOP-8545-5.patch, HADOOP-8545-6.patch, HADOOP-8545-7.patch, 
> HADOOP-8545-8.patch, HADOOP-8545-9.patch, HADOOP-8545-javaclouds-2.patch, 
> HADOOP-8545.patch, HADOOP-8545.patch
>
>
> Add a filesystem implementation for OpenStack Swift object store, similar to 
> the one which exists today for S3.
