[jira] [Commented] (HADOOP-9413) Introduce common utils for File#setReadable/Writable/Executable and File#canRead/Write/Execute that work cross-platform

2013-03-27 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13614973#comment-13614973
 ] 

Ivan Mitic commented on HADOOP-9413:


Thanks for the review Chris!

bq. This seems to revert the decisions made in HADOOP-8973 that checking for an 
owner match and FsAction#implies is insufficient in the presence of other 
permission mechanisms, such as ACLs. In this case though, your patch keeps this 
logic guarded behind an if (Windows) check, so it wouldn't sacrifice existing 
functionality for any existing Linux deployments that use POSIX ACLs. Just so 
I'm clear, is this a proposal that we accept it as a known limitation on 
Windows right now, with potential follow-up work to address it later (via JDK7 
or possibly JNI calls)? That seems to be implicit in the patch, but I want to 
make sure I understand.
I added a comment above asking for opinions on this one. This approach provides 
full symmetry between the can* and set* functions on Windows, which I think is 
a good thing for now. I agree that we'll want to improve this eventually (via 
JDK7 or JNI). However, as long as the abstractions are solid, I think we are 
good. Let me know what you think.


bq. Also, can you please clarify the second part of the if condition? This is 
checking if one of the user's group memberships has the same name as the file 
owner. Is it supposed to check the file group instead of the file owner?
Thanks, will add a comment to clarify this part. On Windows, the owner can also 
be a group. One use case: if I'm running as a member of the admins group 
(elevated) and create a file, the file will be owned by the admins group.

 Introduce common utils for File#setReadable/Writable/Executable and 
 File#canRead/Write/Execute that work cross-platform
 ---

 Key: HADOOP-9413
 URL: https://issues.apache.org/jira/browse/HADOOP-9413
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Fix For: 3.0.0

 Attachments: HADOOP-9413.commonfileutils.patch


 So far, we've seen many unittest and product bugs in Hadoop on Windows 
 because Java's APIs for manipulating permissions do not work as expected. 
 We've addressed many of these problems on a one-by-one basis (by either 
 changing code a bit or disabling the test). While debugging the remaining 
 unittest failures we continue to run into the same patterns of problems, and 
 instead of addressing them one-by-one, I propose that we expose a set of 
 equivalent wrapper APIs that will work well on all platforms.
 Scanning through the codebase, this will actually be a simple change, as there 
 are very few places that use File#setReadable/Writable/Executable and 
 File#canRead/Write/Execute (5 files in Common, 9 files in HDFS).
 HADOOP-8973 contains additional context on the problem.
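
For illustration, a minimal sketch of the kind of cross-platform wrapper being proposed (the class and helper names below are hypothetical; the attached patch defines the actual API):

{code}
import java.io.File;
import java.io.IOException;

/**
 * Illustrative sketch only, not the attached patch: a cross-platform
 * replacement for File#canWrite. On Windows, File#canWrite/setWritable only
 * reflect the read-only attribute, so a real implementation would need an
 * owner/ACL-aware check instead.
 */
public final class CrossPlatformFileUtil {

  private static final boolean WINDOWS =
      System.getProperty("os.name").startsWith("Windows");

  public static boolean canWrite(File f) throws IOException {
    if (WINDOWS) {
      return windowsCanWrite(f);   // hypothetical Windows-specific check
    }
    return f.canWrite();           // POSIX platforms: the JDK call is reliable
  }

  private static boolean windowsCanWrite(File f) throws IOException {
    // Placeholder for an owner/ACL-aware check (e.g. via winutils or JNI).
    return f.canWrite();
  }
}
{code}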

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9437) TestNativeIO#testRenameTo fails on Windows due to assumption that POSIX errno is embedded in NativeIOException

2013-03-27 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13614981#comment-13614981
 ] 

Ivan Mitic commented on HADOOP-9437:


Thanks Chris for the patch.

One comment on the patch. If I'm seeing things correctly, the nativeio 
implementation of rename0 uses CRT#rename, which returns an errno code. On the 
other hand, the Windows implementation of throw_ioe assumes a winerror code. 
Can you please check whether the two are compatible?


 TestNativeIO#testRenameTo fails on Windows due to assumption that POSIX errno 
 is embedded in NativeIOException
 --

 Key: HADOOP-9437
 URL: https://issues.apache.org/jira/browse/HADOOP-9437
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HADOOP-9437.1.patch


 HDFS-4428 added a detailed error message for failures to rename files by 
 embedding the POSIX errno in the {{NativeIOException}}.  On Windows, the 
 mapping of errno is not performed, so the errno enum value will not be 
 present in the {{NativeIOException}}.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9430) TestSSLFactory fails on IBM JVM

2013-03-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13615105#comment-13615105
 ] 

Hudson commented on HADOOP-9430:


Integrated in Hadoop-Yarn-trunk #168 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/168/])
HADOOP-9430. TestSSLFactory fails on IBM JVM. Contributed by Amir Sanjar. 
(Revision 1461268)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1461268
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/FileBasedKeyStoresFactory.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/ReloadingX509TrustManager.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/SSLFactory.java


 TestSSLFactory fails on IBM JVM
 ---

 Key: HADOOP-9430
 URL: https://issues.apache.org/jira/browse/HADOOP-9430
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.3-alpha
Reporter: Amir Sanjar
 Fix For: 2.0.5-beta

 Attachments: HADOOP-9430-branch2.patch, HADOOP-9430.patch, 
 HADOOP-9430-trunk.patch, HADOOP-9430-trunk-v2.patch, 
 HADOOP-9430-trunk-v3.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9194) RPC Support for QoS

2013-03-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13615106#comment-13615106
 ] 

Hudson commented on HADOOP-9194:


Integrated in Hadoop-Yarn-trunk #168 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/168/])
HADOOP-9194. RPC Support for QoS. (Junping Du via llu) (Revision 1461370)

 Result = SUCCESS
llu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1461370
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RPC.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestIPC.java


 RPC Support for QoS
 ---

 Key: HADOOP-9194
 URL: https://issues.apache.org/jira/browse/HADOOP-9194
 Project: Hadoop Common
  Issue Type: New Feature
  Components: ipc
Affects Versions: 2.0.2-alpha
Reporter: Luke Lu
 Fix For: 3.0.0

 Attachments: HADOOP-9194.patch, HADOOP-9194-v2.patch


 One of the next frontiers of Hadoop performance is QoS (Quality of Service). 
 We need QoS support to fight the inevitable buffer bloat (including various 
 queues, which are probably necessary for throughput) in our software stack. 
 This is important for mixed workloads with different latency and throughput 
 requirements (e.g. OLTP vs OLAP, batch and even compaction I/O) against the 
 same DFS.
 Any potential bottleneck will need to be managed by QoS mechanisms, starting 
 with RPC. 
 How about adding a one-byte DS (differentiated services) field (a la the 
 6-bit DS field in the IP header) in the RPC header to facilitate the QoS 
 mechanisms (in separate JIRAs)? A byte at a fixed offset (how about 0?) of 
 the header is helpful for implementing high-performance QoS mechanisms in 
 switches (software or hardware) and servers with minimal decoding effort.
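
To make the proposal concrete, here is a minimal sketch of a one-byte service-class field at offset 0 of a header buffer; the field name and values are assumptions for illustration, not the committed wire format:

{code}
import java.nio.ByteBuffer;

// Illustrative sketch only: place a "differentiated services" byte at a fixed
// offset (0) so a switch or server can classify the call without decoding the
// rest of the header.
class RpcDsFieldSketch {
  static final int DS_OFFSET = 0;        // fixed offset proposed above
  static final byte DS_BEST_EFFORT = 0;  // hypothetical default class

  static ByteBuffer writeHeader(byte serviceClass, byte[] restOfHeader) {
    ByteBuffer buf = ByteBuffer.allocate(1 + restOfHeader.length);
    buf.put(serviceClass);   // classification byte goes first
    buf.put(restOfHeader);   // remainder of the header follows unchanged
    buf.flip();
    return buf;
  }

  static byte readServiceClass(ByteBuffer header) {
    return header.get(DS_OFFSET);  // only one byte needs to be inspected
  }
}
{code}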

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9300) Streaming fails to set output key class when reducer is java class

2013-03-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13615154#comment-13615154
 ] 

Hadoop QA commented on HADOOP-9300:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12570034/HADOOP-9300-2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-tools/hadoop-streaming.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2370//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2370//console

This message is automatically generated.

 Streaming fails to set output key class when reducer is java class
 --

 Key: HADOOP-9300
 URL: https://issues.apache.org/jira/browse/HADOOP-9300
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools
Affects Versions: 2.0.2-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
 Attachments: HADOOP-9300-1.patch, HADOOP-9300-2.patch, 
 HADOOP-9300-2.patch, HADOOP-9300-2.patch, HADOOP-9300.patch, HADOOP-9300.patch


 In an effort to avoid overwriting user configs (MAPREDUCE-1888), StreamJob 
 doesn't set a job's output key/value classes unless they are specified in the 
 streaming command line.  If the configs aren't specified in either of these 
 places, the streaming defaults (Text) no longer kick in, and the global 
 default LongWritable is used.
 This can cause jobs/output writers that are expecting Text to fail.
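
To illustrate the failure mode, a small sketch (illustrative only, not StreamJob's actual code path):

{code}
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.mapred.JobConf;

// Sketch only: when neither the user's config nor the streaming command line
// sets the output key class, JobConf falls back to the MapReduce-wide default
// (LongWritable) rather than the streaming default (Text).
public class StreamingDefaultSketch {
  public static void main(String[] args) {
    JobConf conf = new JobConf();
    // conf.setOutputKeyClass(Text.class);  // skipped unless explicitly requested
    System.out.println(conf.getOutputKeyClass() == LongWritable.class);  // true
    // A downstream writer that expects Text keys will then fail at runtime.
  }
}
{code}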

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9430) TestSSLFactory fails on IBM JVM

2013-03-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13615205#comment-13615205
 ] 

Hudson commented on HADOOP-9430:


Integrated in Hadoop-Hdfs-trunk #1357 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1357/])
HADOOP-9430. TestSSLFactory fails on IBM JVM. Contributed by Amir Sanjar. 
(Revision 1461268)

 Result = FAILURE
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1461268
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/FileBasedKeyStoresFactory.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/ReloadingX509TrustManager.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/SSLFactory.java


 TestSSLFactory fails on IBM JVM
 ---

 Key: HADOOP-9430
 URL: https://issues.apache.org/jira/browse/HADOOP-9430
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.3-alpha
Reporter: Amir Sanjar
 Fix For: 2.0.5-beta

 Attachments: HADOOP-9430-branch2.patch, HADOOP-9430.patch, 
 HADOOP-9430-trunk.patch, HADOOP-9430-trunk-v2.patch, 
 HADOOP-9430-trunk-v3.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9194) RPC Support for QoS

2013-03-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13615206#comment-13615206
 ] 

Hudson commented on HADOOP-9194:


Integrated in Hadoop-Hdfs-trunk #1357 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1357/])
HADOOP-9194. RPC Support for QoS. (Junping Du via llu) (Revision 1461370)

 Result = FAILURE
llu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1461370
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RPC.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestIPC.java


 RPC Support for QoS
 ---

 Key: HADOOP-9194
 URL: https://issues.apache.org/jira/browse/HADOOP-9194
 Project: Hadoop Common
  Issue Type: New Feature
  Components: ipc
Affects Versions: 2.0.2-alpha
Reporter: Luke Lu
 Fix For: 3.0.0

 Attachments: HADOOP-9194.patch, HADOOP-9194-v2.patch


 One of the next frontiers of Hadoop performance is QoS (Quality of Service). 
 We need QoS support to fight the inevitable buffer bloat (including various 
 queues, which are probably necessary for throughput) in our software stack. 
 This is important for mixed workloads with different latency and throughput 
 requirements (e.g. OLTP vs OLAP, batch and even compaction I/O) against the 
 same DFS.
 Any potential bottleneck will need to be managed by QoS mechanisms, starting 
 with RPC. 
 How about adding a one-byte DS (differentiated services) field (a la the 
 6-bit DS field in the IP header) in the RPC header to facilitate the QoS 
 mechanisms (in separate JIRAs)? A byte at a fixed offset (how about 0?) of 
 the header is helpful for implementing high-performance QoS mechanisms in 
 switches (software or hardware) and servers with minimal decoding effort.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9430) TestSSLFactory fails on IBM JVM

2013-03-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13615275#comment-13615275
 ] 

Hudson commented on HADOOP-9430:


Integrated in Hadoop-Mapreduce-trunk #1385 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1385/])
HADOOP-9430. TestSSLFactory fails on IBM JVM. Contributed by Amir Sanjar. 
(Revision 1461268)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1461268
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/FileBasedKeyStoresFactory.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/ReloadingX509TrustManager.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/SSLFactory.java


 TestSSLFactory fails on IBM JVM
 ---

 Key: HADOOP-9430
 URL: https://issues.apache.org/jira/browse/HADOOP-9430
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.3-alpha
Reporter: Amir Sanjar
 Fix For: 2.0.5-beta

 Attachments: HADOOP-9430-branch2.patch, HADOOP-9430.patch, 
 HADOOP-9430-trunk.patch, HADOOP-9430-trunk-v2.patch, 
 HADOOP-9430-trunk-v3.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9194) RPC Support for QoS

2013-03-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13615277#comment-13615277
 ] 

Hudson commented on HADOOP-9194:


Integrated in Hadoop-Mapreduce-trunk #1385 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1385/])
HADOOP-9194. RPC Support for QoS. (Junping Du via llu) (Revision 1461370)

 Result = SUCCESS
llu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1461370
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RPC.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestIPC.java


 RPC Support for QoS
 ---

 Key: HADOOP-9194
 URL: https://issues.apache.org/jira/browse/HADOOP-9194
 Project: Hadoop Common
  Issue Type: New Feature
  Components: ipc
Affects Versions: 2.0.2-alpha
Reporter: Luke Lu
 Fix For: 3.0.0

 Attachments: HADOOP-9194.patch, HADOOP-9194-v2.patch


 One of the next frontiers of Hadoop performance is QoS (Quality of Service). 
 We need QoS support to fight the inevitable buffer bloat (including various 
 queues, which are probably necessary for throughput) in our software stack. 
 This is important for mixed workloads with different latency and throughput 
 requirements (e.g. OLTP vs OLAP, batch and even compaction I/O) against the 
 same DFS.
 Any potential bottleneck will need to be managed by QoS mechanisms, starting 
 with RPC. 
 How about adding a one-byte DS (differentiated services) field (a la the 
 6-bit DS field in the IP header) in the RPC header to facilitate the QoS 
 mechanisms (in separate JIRAs)? A byte at a fixed offset (how about 0?) of 
 the header is helpful for implementing high-performance QoS mechanisms in 
 switches (software or hardware) and servers with minimal decoding effort.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9363) AuthenticatedURL will NPE if server closes connection

2013-03-27 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13615388#comment-13615388
 ] 

Daryn Sharp commented on HADOOP-9363:
-

This also occurs for unexpected kerberos errors such as a kvno version mismatch 
between the client's service ticket and the server's HTTP principal in its 
keytab.

{noformat}
Caused by: GSSException: Failure unspecified at GSS-API level (Mechanism level: 
Specified version of key is not available (44))
at 
sun.security.jgss.krb5.Krb5Context.acceptSecContext(Krb5Context.java:788)
at 
sun.security.jgss.GSSContextImpl.acceptSecContext(GSSContextImpl.java:342)
at 
sun.security.jgss.GSSContextImpl.acceptSecContext(GSSContextImpl.java:285)
at 
sun.security.jgss.spnego.SpNegoContext.GSS_acceptSecContext(SpNegoContext.java:871)
at 
sun.security.jgss.spnego.SpNegoContext.acceptSecContext(SpNegoContext.java:544)
at 
sun.security.jgss.GSSContextImpl.acceptSecContext(GSSContextImpl.java:342)
at 
sun.security.jgss.GSSContextImpl.acceptSecContext(GSSContextImpl.java:285)
at 
org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler$2.run(KerberosAuthenticationHandler.java:278)
at 
org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler$2.run(KerberosAuthenticationHandler.java:270)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler.authenticate(KerberosAuthenticationHandler.java:270)
... 23 more
Caused by: KrbException: Specified version of key is not available (44)
at sun.security.krb5.EncryptionKey.findKey(EncryptionKey.java:588)
at sun.security.krb5.KrbApReq.authenticate(KrbApReq.java:270)
at sun.security.krb5.KrbApReq.init(KrbApReq.java:144)
at 
sun.security.jgss.krb5.InitSecContextToken.init(InitSecContextToken.java:108)
at 
sun.security.jgss.krb5.Krb5Context.acceptSecContext(Krb5Context.java:771)
{noformat}

I sniffed the packets and the SPNEGO exchange proceeds as expected: the server 
sends a 401 with a WWW-Authenticate header, the client responds with an 
Authorization header, and the server responds with another 401 whose status 
message is set to the kerberos exception - the client then NPEs on that 
response.  It's unclear (I haven't investigated) whether it's a JDK bug or 
whether AuthenticatedURL's twiddling of the URLConnection is causing the issue.

 AuthenticatedURL will NPE if server closes connection
 -

 Key: HADOOP-9363
 URL: https://issues.apache.org/jira/browse/HADOOP-9363
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp

 A NPE occurs if the server unexpectedly closes the connection for an 
 {{AuthenticatedURL}} w/o sending a response.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9435) Native build hadoop-common-project fails on $JAVA_HOME/include/jni_md.h using ibm java

2013-03-27 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13615477#comment-13615477
 ] 

Colin Patrick McCabe commented on HADOOP-9435:
--

seems reasonable to me.

 Native build hadoop-common-project fails on $JAVA_HOME/include/jni_md.h using 
 ibm java
 --

 Key: HADOOP-9435
 URL: https://issues.apache.org/jira/browse/HADOOP-9435
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Reporter: Tian Hong Wang
  Labels: patch
 Fix For: 2.0.3-alpha

 Attachments: HADOOP-9435.patch


 When building the native parts of hadoop-common-project with IBM Java, using a 
 command like:
 mvn package -Pnative
 the build fails with the following errors.
  [exec] CMake Error at JNIFlags.cmake:113 (MESSAGE):
  [exec]   Failed to find a viable JVM installation under JAVA_HOME.
  [exec] Call Stack (most recent call first):
  [exec]   CMakeLists.txt:24 (include)
  [exec] 
  [exec] 
  [exec] -- Configuring incomplete, errors occurred!
 The reason is that IBM Java provides $JAVA_HOME/include/jniport.h instead of 
 the $JAVA_HOME/include/jni_md.h found in Oracle Java.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9413) Introduce common utils for File#setReadable/Writable/Executable and File#canRead/Write/Execute that work cross-platform

2013-03-27 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13615506#comment-13615506
 ] 

Bikas Saha commented on HADOOP-9413:


Chris already has code that does the expected thing for the scenario in which 
the running process is checking whether it has read/write/execute permissions 
on a directory. We could move that code into helper functions and use them. This 
is important because, once this check succeeds, the process goes ahead and 
performs the action that depends on it. So my preference would be to use the 
code that provides the expected functionality. We can improve that code later 
on.

 Introduce common utils for File#setReadable/Writable/Executable and 
 File#canRead/Write/Execute that work cross-platform
 ---

 Key: HADOOP-9413
 URL: https://issues.apache.org/jira/browse/HADOOP-9413
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Fix For: 3.0.0

 Attachments: HADOOP-9413.commonfileutils.patch


 So far, we've seen many unittest and product bugs in Hadoop on Windows 
 because Java's APIs for manipulating permissions do not work as expected. 
 We've addressed many of these problems on a one-by-one basis (by either 
 changing code a bit or disabling the test). While debugging the remaining 
 unittest failures we continue to run into the same patterns of problems, and 
 instead of addressing them one-by-one, I propose that we expose a set of 
 equivalent wrapper APIs that will work well on all platforms.
 Scanning through the codebase, this will actually be a simple change, as there 
 are very few places that use File#setReadable/Writable/Executable and 
 File#canRead/Write/Execute (5 files in Common, 9 files in HDFS).
 HADOOP-8973 contains additional context on the problem.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8415) getDouble() and setDouble() in org.apache.hadoop.conf.Configuration

2013-03-27 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HADOOP-8415:


Fix Version/s: (was: 3.0.0)
   2.0.5-beta

I just merged this into branch-2 (in the context of YARN-474).

 getDouble() and setDouble() in org.apache.hadoop.conf.Configuration
 ---

 Key: HADOOP-8415
 URL: https://issues.apache.org/jira/browse/HADOOP-8415
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf
Affects Versions: 1.0.2
Reporter: Jan van der Lugt
Priority: Minor
 Fix For: 2.0.5-beta

 Attachments: HADOOP-8415.patch

   Original Estimate: 0.25h
  Remaining Estimate: 0.25h

 In the org.apache.hadoop.conf.Configuration class, methods exist to set 
 Integers, Longs, Booleans, Floats and Strings, but methods for Doubles are 
 absent. Are they not there for a reason or should they be added? In the 
 latter case, the attached patch contains the missing functions.
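
For illustration, the missing accessors would presumably mirror the existing getFloat/setFloat pattern; a rough sketch follows (the attached patch is authoritative):

{code}
// Sketch of accessors to be added to org.apache.hadoop.conf.Configuration,
// modeled on the existing getFloat/setFloat pair; HADOOP-8415.patch is the
// authoritative version.
public double getDouble(String name, double defaultValue) {
  String valueString = get(name);
  if (valueString == null) {
    return defaultValue;
  }
  return Double.parseDouble(valueString);
}

public void setDouble(String name, double value) {
  set(name, Double.toString(value));
}
{code}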

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (HADOOP-8415) getDouble() and setDouble() in org.apache.hadoop.conf.Configuration

2013-03-27 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli reassigned HADOOP-8415:
---

Assignee: Jan van der Lugt

 getDouble() and setDouble() in org.apache.hadoop.conf.Configuration
 ---

 Key: HADOOP-8415
 URL: https://issues.apache.org/jira/browse/HADOOP-8415
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf
Affects Versions: 1.0.2
Reporter: Jan van der Lugt
Assignee: Jan van der Lugt
Priority: Minor
 Fix For: 2.0.5-beta

 Attachments: HADOOP-8415.patch

   Original Estimate: 0.25h
  Remaining Estimate: 0.25h

 In the org.apache.hadoop.conf.Configuration class, methods exist to set 
 Integers, Longs, Booleans, Floats and Strings, but methods for Doubles are 
 absent. Are they not there for a reason or should they be added? In the 
 latter case, the attached patch contains the missing functions.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8415) getDouble() and setDouble() in org.apache.hadoop.conf.Configuration

2013-03-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13615527#comment-13615527
 ] 

Hudson commented on HADOOP-8415:


Integrated in Hadoop-trunk-Commit #3536 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3536/])
HADOOP-8415. Add getDouble() and setDouble() in 
org.apache.hadoop.conf.Configuration (Jan van der Lugt via harsh)
Merging into branch-2. Updating CHANGES.txt (Revision 1461727)

 Result = SUCCESS
vinodkv : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1461727
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


 getDouble() and setDouble() in org.apache.hadoop.conf.Configuration
 ---

 Key: HADOOP-8415
 URL: https://issues.apache.org/jira/browse/HADOOP-8415
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf
Affects Versions: 1.0.2
Reporter: Jan van der Lugt
Assignee: Jan van der Lugt
Priority: Minor
 Fix For: 2.0.5-beta

 Attachments: HADOOP-8415.patch

   Original Estimate: 0.25h
  Remaining Estimate: 0.25h

 In the org.apache.hadoop.conf.Configuration class, methods exist to set 
 Integers, Longs, Booleans, Floats and Strings, but methods for Doubles are 
 absent. Are they not there for a reason or should they be added? In the 
 latter case, the attached patch contains the missing functions.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Moved] (HADOOP-9439) JniBasedUnixGroupsMapping: fix some crash bugs

2013-03-27 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe moved HDFS-4640 to HADOOP-9439:


  Component/s: (was: security)
   native
 Target Version/s:   (was: 2.0.5-beta)
Affects Version/s: (was: 2.0.4-alpha)
   2.0.4-alpha
  Key: HADOOP-9439  (was: HDFS-4640)
  Project: Hadoop Common  (was: Hadoop HDFS)

 JniBasedUnixGroupsMapping: fix some crash bugs
 --

 Key: HADOOP-9439
 URL: https://issues.apache.org/jira/browse/HADOOP-9439
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.0.4-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HDFS-4640.002.patch


 JniBasedUnixGroupsMapping has some issues.
 * sometimes on error paths variables are freed prior to being initialized
 * re-allocate buffers less frequently (can reuse the same buffer for multiple 
 calls to getgrnam)
 * allow non-reentrant functions to be used, to work around client bugs
 * don't throw IOException from JNI functions if the JNI functions do not 
 declare this checked exception.
 * don't bail out if only one group name among all the ones associated with a 
 user can't be looked up.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9439) JniBasedUnixGroupsMapping: fix some crash bugs

2013-03-27 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13615542#comment-13615542
 ] 

Colin Patrick McCabe commented on HADOOP-9439:
--

moved this to common at ATM's suggestion.

 JniBasedUnixGroupsMapping: fix some crash bugs
 --

 Key: HADOOP-9439
 URL: https://issues.apache.org/jira/browse/HADOOP-9439
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.0.4-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HDFS-4640.002.patch


 JniBasedUnixGroupsMapping has some issues.
 * sometimes on error paths variables are freed prior to being initialized
 * re-allocate buffers less frequently (can reuse the same buffer for multiple 
 calls to getgrnam)
 * allow non-reentrant functions to be used, to work around client bugs
 * don't throw IOException from JNI functions if the JNI functions do not 
 declare this checked exception.
 * don't bail out if only one group name among all the ones associated with a 
 user can't be looked up.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9413) Introduce common utils for File#setReadable/Writable/Executable and File#canRead/Write/Execute that work cross-platform

2013-03-27 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13615556#comment-13615556
 ] 

Chris Nauroth commented on HADOOP-9413:
---

{quote}
However, as long as the abstractions are solid, I think we are good. Let me 
know what you think.
{quote}

Yes, I am definitely in favor of the structure of this patch: providing common 
utility wrappers for these methods.  I think the only open question is whether 
the implementation should revert the logic of HADOOP-8973 like this.

{quote}
Chris already has code that does the expected thing for the scenario in which 
the running process is checking whether it has read/write/execute permissions 
on a directory.
{quote}

A couple of caveats on this:

# The code I wrote for HADOOP-8973 works only for directories, and we need 
these common APIs to handle both files and directories.  See Ivan's patches on 
HDFS-4610 and YARN-506 for some examples where the codebase needs to get/set 
permissions on individual files.
# There was also the question of performance, because my code actually 
performed file access and forked new processes to implement the access checks.

These were not problems for HADOOP-8973, because {{DiskChecker}} is used only 
for directories and it's used rarely enough that the performance impact would 
likely be unnoticeable.

I'm wondering if it's time for us to write a JNI call to {{AccessCheck}}:

http://msdn.microsoft.com/en-us/library/windows/desktop/aa374815(v=vs.85).aspx

This would only be used if (Windows && JDK version < 7).  Otherwise, we expect 
the JDK APIs to work.

Obviously, this will take longer to implement.  A potential compromise would be 
to go with the patch's current implementation of the new functions, but leave 
{{DiskChecker}} as is temporarily, just until we get the JNI call written.  
That way, the other permission checks in the codebase get pretty close to 
working (not 100% correct, but certainly better than the current state), and we 
still maintain full correctness for the very important {{DiskChecker}} piece.

Ivan and Bikas, do you have any follow-up thoughts?
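
For concreteness, the dispatch inside the proposed wrappers might look roughly like the following; the native helper is hypothetical and stands in for the AccessCheck-backed JNI call discussed above:

{code}
// Rough sketch only, not the patch. Shell.WINDOWS is the existing platform
// flag; isAtLeastJava7() and nativeCanRead() are hypothetical helpers.
public static boolean canRead(File f) throws IOException {
  if (Shell.WINDOWS && !isAtLeastJava7()) {
    // JNI wrapper around the Win32 AccessCheck API (to be written)
    return nativeCanRead(f.getAbsolutePath());
  }
  return f.canRead();  // JDK 7+ and non-Windows: the JDK call behaves correctly
}
{code}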


 Introduce common utils for File#setReadable/Writable/Executable and 
 File#canRead/Write/Execute that work cross-platform
 ---

 Key: HADOOP-9413
 URL: https://issues.apache.org/jira/browse/HADOOP-9413
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Fix For: 3.0.0

 Attachments: HADOOP-9413.commonfileutils.patch


 So far, we've seen many unittest and product bugs in Hadoop on Windows 
 because Java's APIs for manipulating permissions do not work as expected. 
 We've addressed many of these problems on a one-by-one basis (by either 
 changing code a bit or disabling the test). While debugging the remaining 
 unittest failures we continue to run into the same patterns of problems, and 
 instead of addressing them one-by-one, I propose that we expose a set of 
 equivalent wrapper APIs that will work well on all platforms.
 Scanning through the codebase, this will actually be a simple change, as there 
 are very few places that use File#setReadable/Writable/Executable and 
 File#canRead/Write/Execute (5 files in Common, 9 files in HDFS).
 HADOOP-8973 contains additional context on the problem.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9439) JniBasedUnixGroupsMapping: fix some crash bugs

2013-03-27 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-9439:
-

Attachment: HADOOP-9439.001.patch

Adds the ability to configure JniBasedUnixGroupsMapping to use the 
non-reentrant getpwent and getgrent functions at runtime, to work around 
implementations where the lookups are buggy.

Also passes in an empty array in a threadsafe manner, duplicating the earlier 
optimization of returning a pre-allocated empty array when there are no groups 
to be found.

 JniBasedUnixGroupsMapping: fix some crash bugs
 --

 Key: HADOOP-9439
 URL: https://issues.apache.org/jira/browse/HADOOP-9439
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.0.4-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HADOOP-9439.001.patch, HDFS-4640.002.patch


 JniBasedUnixGroupsMapping has some issues.
 * sometimes on error paths variables are freed prior to being initialized
 * re-allocate buffers less frequently (can reuse the same buffer for multiple 
 calls to getgrnam)
 * allow non-reentrant functions to be used, to work around client bugs
 * don't throw IOException from JNI functions if the JNI functions do not 
 declare this checked exception.
 * don't bail out if only one group name among all the ones associated with a 
 user can't be looked up.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9439) JniBasedUnixGroupsMapping: fix some crash bugs

2013-03-27 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13615592#comment-13615592
 ] 

Chris Nauroth commented on HADOOP-9439:
---

Hi, Colin.  I had filed HADOOP-9312 for a potential memory leak around the lazy 
initialization of {{emptyGroups}}.  From my quick scan of the patch here, it 
looks like this will fix it (at least for the non-Windows codebase).  Do you 
agree?  If so, I will relate the 2 jiras, or perhaps file a new jira for 
porting this patch to the Windows version.  Thanks!

 JniBasedUnixGroupsMapping: fix some crash bugs
 --

 Key: HADOOP-9439
 URL: https://issues.apache.org/jira/browse/HADOOP-9439
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.0.4-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HADOOP-9439.001.patch, HDFS-4640.002.patch


 JniBasedUnixGroupsMapping has some issues.
 * sometimes on error paths variables are freed prior to being initialized
 * re-allocate buffers less frequently (can reuse the same buffer for multiple 
 calls to getgrnam)
 * allow non-reentrant functions to be used, to work around client bugs
 * don't throw IOException from JNI functions if the JNI functions do not 
 declare this checked exception.
 * don't bail out if only one group name among all the ones associated with a 
 user can't be looked up.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9439) JniBasedUnixGroupsMapping: fix some crash bugs

2013-03-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13615601#comment-13615601
 ] 

Hadoop QA commented on HADOOP-9439:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12575606/HDFS-4640.002.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2371//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2371//console

This message is automatically generated.

 JniBasedUnixGroupsMapping: fix some crash bugs
 --

 Key: HADOOP-9439
 URL: https://issues.apache.org/jira/browse/HADOOP-9439
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.0.4-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HADOOP-9439.001.patch, HDFS-4640.002.patch


 JniBasedUnixGroupsMapping has some issues.
 * sometimes on error paths variables are freed prior to being initialized
 * re-allocate buffers less frequently (can reuse the same buffer for multiple 
 calls to getgrnam)
 * allow non-reentrant functions to be used, to work around client bugs
 * don't throw IOException from JNI functions if the JNI functions do not 
 declare this checked exception.
 * don't bail out if only one group name among all the ones associated with a 
 user can't be looked up.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9150) Unnecessary DNS resolution attempts for logical URIs

2013-03-27 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13615612#comment-13615612
 ] 

Todd Lipcon commented on HADOOP-9150:
-

Per above, the javac warnings are because the unit test spies on the DNS 
resolver.

 Unnecessary DNS resolution attempts for logical URIs
 

 Key: HADOOP-9150
 URL: https://issues.apache.org/jira/browse/HADOOP-9150
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3, ha, performance, viewfs
Affects Versions: 3.0.0, 2.0.2-alpha
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Critical
 Attachments: hadoop-9150.txt, hadoop-9150.txt, hadoop-9150.txt, 
 hadoop-9150.txt, hadoop-9150.txt, hadoop-9150.txt, hadoop-9150.txt, 
 hadoop-9150.txt, hadoop-9150.txt, hadoop-9150.txt, log.txt, 
 tracing-resolver.tgz


 In the FileSystem code, we accidentally try to DNS-resolve the logical name 
 before it is converted to an actual domain name. In some DNS setups, this can 
 cause a big slowdown - e.g. in one misconfigured cluster we saw a 2-3x drop in 
 terasort throughput, since every task wasted a lot of time waiting for slow 
 "not found" responses from DNS.
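
As a rough illustration of the idea of recognizing logical names before resolving them (the key name and placement here are assumptions; the actual patch may take a different approach):

{code}
// Sketch only: treat an authority that matches a configured HA nameservice ID
// as a logical name and skip DNS resolution for it.
private static boolean isLogicalHost(Configuration conf, String host) {
  return conf.getTrimmedStringCollection("dfs.nameservices").contains(host);
}
{code}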

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9358) Auth failed log should include exception string

2013-03-27 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HADOOP-9358:


   Resolution: Fixed
Fix Version/s: 2.0.5-beta
   3.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed to branch-2 and trunk. No tests since it's just a log change.

 Auth failed log should include exception string
 -

 Key: HADOOP-9358
 URL: https://issues.apache.org/jira/browse/HADOOP-9358
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc, security
Affects Versions: 3.0.0, 2.0.5-beta
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Fix For: 3.0.0, 2.0.5-beta

 Attachments: hadoop-9385.txt


 Currently, when authentication fails, we see a WARN message like:
 {code}
 2013-02-28 22:49:03,152 WARN  ipc.Server 
 (Server.java:saslReadAndProcess(1056)) - Auth failed for 1.2.3.4:12345:null
 {code}
 This is not useful to understand the underlying cause. The WARN entry should 
 additionally include the exception text, eg:
 {code}
 2013-02-28 22:49:03,152 WARN  ipc.Server 
 (Server.java:saslReadAndProcess(1056)) - Auth failed for 1.2.3.4:12345:null 
 (GSS initiate failed [Caused by GSSException: Failure unspecified at GSS-API 
 level (Mechanism level: Request is a replay (34))])
 {code}
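
A hypothetical sketch of the statement-level change being asked for (variable names are illustrative):

{code}
// Append the exception text to the existing WARN entry so the cause is visible.
LOG.warn("Auth failed for " + remoteAddress + ":" + remotePort + ":" + user
    + " (" + e.getLocalizedMessage() + ")");
{code}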

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9436) NetgroupCache does not refresh membership correctly

2013-03-27 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-9436:
---

Attachment: HADOOP-9436.patch

 NetgroupCache does not refresh membership correctly
 ---

 Key: HADOOP-9436
 URL: https://issues.apache.org/jira/browse/HADOOP-9436
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.7
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Attachments: HADOOP-9436.patch


 NetgroupCache is used to work around the inability to obtain a single 
 user-to-groups mapping from netgroups. For example, the ACL code 
 pre-populates this cache so that any user-group mapping can be resolved for 
 all groups defined in the service.
 However, the current refresh code only adds users to existing groups, so a 
 loss of group membership won't take effect. This is because the internal 
 user-groups mapping cache is never invalidated. If it is simply invalidated 
 on clear(), the cache entries will build up correctly, but user-group 
 resolution may fail during refresh, resulting in accesses being incorrectly 
 denied.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9436) NetgroupCache does not refresh membership correctly

2013-03-27 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-9436:
---

Status: Patch Available  (was: Open)

 NetgroupCache does not refresh membership correctly
 ---

 Key: HADOOP-9436
 URL: https://issues.apache.org/jira/browse/HADOOP-9436
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.3-alpha, 3.0.0, 0.23.7
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Attachments: HADOOP-9436.patch


 NetgroupCache is used to work around the inability to obtain a single 
 user-to-groups mapping from netgroups. For example, the ACL code 
 pre-populates this cache so that any user-group mapping can be resolved for 
 all groups defined in the service.
 However, the current refresh code only adds users to existing groups, so a 
 loss of group membership won't take effect. This is because the internal 
 user-groups mapping cache is never invalidated. If it is simply invalidated 
 on clear(), the cache entries will build up correctly, but user-group 
 resolution may fail during refresh, resulting in accesses being incorrectly 
 denied.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9150) Unnecessary DNS resolution attempts for logical URIs

2013-03-27 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13615648#comment-13615648
 ] 

Aaron T. Myers commented on HADOOP-9150:


+1, the latest patch looks good to me.

 Unnecessary DNS resolution attempts for logical URIs
 

 Key: HADOOP-9150
 URL: https://issues.apache.org/jira/browse/HADOOP-9150
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3, ha, performance, viewfs
Affects Versions: 3.0.0, 2.0.2-alpha
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Critical
 Attachments: hadoop-9150.txt, hadoop-9150.txt, hadoop-9150.txt, 
 hadoop-9150.txt, hadoop-9150.txt, hadoop-9150.txt, hadoop-9150.txt, 
 hadoop-9150.txt, hadoop-9150.txt, hadoop-9150.txt, log.txt, 
 tracing-resolver.tgz


 In the FileSystem code, we accidentally try to DNS-resolve the logical name 
 before it is converted to an actual domain name. In some DNS setups, this can 
 cause a big slowdown - eg in one misconfigured cluster we saw a 2-3x drop in 
 terasort throughput, since every task wasted a lot of time waiting for slow 
 not found responses from DNS.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9436) NetgroupCache does not refresh membership correctly

2013-03-27 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13615663#comment-13615663
 ] 

Kihwal Lee commented on HADOOP-9436:


I tried to minimize contention in the patch. From a comment section in the 
patch:

{code}
   * There are three different points where concurrent accesses happen.
   * 1) User-to-groups mapping: This is a thread-safe data structure, so
   *    putting and getting don't need to be made safe.
   * 2) cachedGroups: This needs to be protected from concurrent accesses.
   *    Also, the content needs to be kept in sync with 1).
   *    Synchronize on cachedGroups whenever updating mappings.
   * 3) refresh: Refresh requests need to be serialized.
   *    Callers are to be synchronized on refreshLock.
   *
   * By separating into three, cache lookups are never explicitly blocked.
   * Regular cache add activities and refresh can mostly overlap.
{code}

Improved:
* Refresh does not leave removed users in the cache.
* No more direct modification of values in the map. It wasn't thread safe.
* No more rebuild-from-scratch whenever a group is added.
* Addition of fine-grained locking.
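
A structural sketch of those three synchronization points (field and method names here are illustrative, not the patch's):

{code}
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Illustrative sketch only of the locking layout described above.
class NetgroupCacheSketch {
  // (1) thread-safe map: lookups never block
  private final ConcurrentMap<String, Set<String>> userToGroups =
      new ConcurrentHashMap<String, Set<String>>();
  // (2) guarded by synchronized(cachedGroups), kept in sync with (1)
  private final Set<String> cachedGroups = new HashSet<String>();
  // (3) serializes refresh requests
  private final Object refreshLock = new Object();

  void add(String group, List<String> users) {
    synchronized (cachedGroups) {
      cachedGroups.add(group);
      for (String user : users) {
        Set<String> groups = userToGroups.get(user);
        if (groups == null) {
          groups = Collections.newSetFromMap(new ConcurrentHashMap<String, Boolean>());
          Set<String> existing = userToGroups.putIfAbsent(user, groups);
          if (existing != null) {
            groups = existing;
          }
        }
        groups.add(group);   // concurrent set: safe for lock-free readers
      }
    }
  }

  void refresh() {
    synchronized (refreshLock) {
      // Rebuild the mappings for all cached groups, then replace stale
      // entries so that removed users actually disappear.
    }
  }
}
{code}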

 NetgroupCache does not refresh membership correctly
 ---

 Key: HADOOP-9436
 URL: https://issues.apache.org/jira/browse/HADOOP-9436
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.7
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Attachments: HADOOP-9436.patch


 NetgroupCache is used to get around the problem of inability to obtain a 
 single user-to-groups mapping from netgroup. For example, the ACL code 
 pre-populates this cache, so that any user-group mapping can be resolved for 
 all groups defined in the service.
 However, the current refresh code only adds users to existing groups, so a 
 loss of group membership won't take effect. This is because the internal 
 user-groups mapping cache is never invalidated. If this is simply invalidated 
 on clear(), the cache entries will build up correctly, but user-group 
 resolution may fail during refresh, resulting in incorrectly denying accesses.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (HADOOP-9125) LdapGroupsMapping threw CommunicationException after some idle time

2013-03-27 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers reassigned HADOOP-9125:
--

Assignee: Kai Zheng

 LdapGroupsMapping threw CommunicationException after some idle time
 ---

 Key: HADOOP-9125
 URL: https://issues.apache.org/jira/browse/HADOOP-9125
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 0.23.3, 2.0.0-alpha
Reporter: Kai Zheng
Assignee: Kai Zheng
 Attachments: HADOOP-9125.patch, HADOOP-9125.patch

   Original Estimate: 24h
  Remaining Estimate: 24h

 LdapGroupsMapping threw the exception below after some idle time. To 
 reproduce it, no call to the group mapping provider should be made during the 
 idle period.
 2012-12-07 02:20:59,738 WARN org.apache.hadoop.security.LdapGroupsMapping: 
 Exception trying to get groups for user aduser2
 javax.naming.CommunicationException: connection closed [Root exception is 
 java.io.IOException: connection closed]; remaining name 
 'CN=Users,DC=EXAMPLE,DC=COM'
 at com.sun.jndi.ldap.LdapCtx.doSearch(LdapCtx.java:1983)
 at com.sun.jndi.ldap.LdapCtx.searchAux(LdapCtx.java:1827)
 at com.sun.jndi.ldap.LdapCtx.c_search(LdapCtx.java:1752)
 at com.sun.jndi.ldap.LdapCtx.c_search(LdapCtx.java:1769)
 at 
 com.sun.jndi.toolkit.ctx.ComponentDirContext.p_search(ComponentDirContext.java:394)
 at 
 com.sun.jndi.toolkit.ctx.PartialCompositeDirContext.search(PartialCompositeDirContext.java:376)
 at 
 com.sun.jndi.toolkit.ctx.PartialCompositeDirContext.search(PartialCompositeDirContext.java:358)
 at 
 javax.naming.directory.InitialDirContext.search(InitialDirContext.java:267)
 at 
 org.apache.hadoop.security.LdapGroupsMapping.getGroups(LdapGroupsMapping.java:187)
 at 
 org.apache.hadoop.security.CompositeGroupsMapping.getGroups(CompositeGroupsMapping.java:97)
 at org.apache.hadoop.security.Groups.doGetGroups(Groups.java:103)
 at org.apache.hadoop.security.Groups.getGroups(Groups.java:70)
 at 
 org.apache.hadoop.security.UserGroupInformation.getGroupNames(UserGroupInformation.java:1035)
 at org.apache.hadoop.hbase.security.User.getGroupNames(User.java:90)
 at 
 org.apache.hadoop.hbase.security.access.TableAuthManager.authorize(TableAuthManager.java:355)
 at 
 org.apache.hadoop.hbase.security.access.AccessController.requirePermission(AccessController.java:379)
 at 
 org.apache.hadoop.hbase.security.access.AccessController.getUserPermissions(AccessController.java:1051)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion.exec(HRegion.java:4914)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.execCoprocessor(HRegionServer.java:3546)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 org.apache.hadoop.hbase.ipc.SecureRpcEngine$Server.call(SecureRpcEngine.java:372)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1399)
 Caused by: java.io.IOException: connection closed
 at com.sun.jndi.ldap.LdapClient.ensureOpen(LdapClient.java:1558)
 at com.sun.jndi.ldap.LdapClient.search(LdapClient.java:503)
 at com.sun.jndi.ldap.LdapCtx.doSearch(LdapCtx.java:1965)
 ... 28 more
 2012-12-07 02:20:59,739 WARN org.apache.hadoop.security.UserGroupInformation: 
 No groups available for user aduser2
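
 A minimal sketch of one way to recover from the dropped idle connection, i.e. 
 rebuild the LDAP context and retry on CommunicationException; the class and 
 helper names are illustrative assumptions, not necessarily what the attached 
 patch does:
{noformat}
import java.util.Hashtable;
import javax.naming.CommunicationException;
import javax.naming.NamingException;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;

// Hypothetical retry wrapper around a JNDI search.
class LdapSearchWithRetry {
  // env would carry the provider URL, bind credentials, etc.
  private final Hashtable<String, String> env = new Hashtable<String, String>();
  private DirContext ctx;

  private DirContext getDirContext() throws NamingException {
    if (ctx == null) {
      ctx = new InitialDirContext(env); // (re)connect to the LDAP server
    }
    return ctx;
  }

  void search(String base, String filter) throws NamingException {
    try {
      getDirContext().search(base, filter, null);
    } catch (CommunicationException e) {
      // The idle connection was closed by the server or a firewall;
      // drop the stale context, reconnect, and retry once.
      ctx = null;
      getDirContext().search(base, filter, null);
    }
  }
}
{noformat}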

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9125) LdapGroupsMapping threw CommunicationException after some idle time

2013-03-27 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13615791#comment-13615791
 ] 

Aaron T. Myers commented on HADOOP-9125:


+1, the patch looks good to me. I'm going to commit this momentarily.

 LdapGroupsMapping threw CommunicationException after some idle time
 ---

 Key: HADOOP-9125
 URL: https://issues.apache.org/jira/browse/HADOOP-9125
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 0.23.3, 2.0.0-alpha
Reporter: Kai Zheng
Assignee: Kai Zheng
 Attachments: HADOOP-9125.patch, HADOOP-9125.patch

   Original Estimate: 24h
  Remaining Estimate: 24h

 LdapGroupsMapping threw the exception below after some idle time. To reproduce 
 it, make no calls to the group mapping provider during that idle period.
 2012-12-07 02:20:59,738 WARN org.apache.hadoop.security.LdapGroupsMapping: 
 Exception trying to get groups for user aduser2
 javax.naming.CommunicationException: connection closed [Root exception is 
 java.io.IOException: connection closed]; remaining name 
 'CN=Users,DC=EXAMPLE,DC=COM'
 at com.sun.jndi.ldap.LdapCtx.doSearch(LdapCtx.java:1983)
 at com.sun.jndi.ldap.LdapCtx.searchAux(LdapCtx.java:1827)
 at com.sun.jndi.ldap.LdapCtx.c_search(LdapCtx.java:1752)
 at com.sun.jndi.ldap.LdapCtx.c_search(LdapCtx.java:1769)
 at 
 com.sun.jndi.toolkit.ctx.ComponentDirContext.p_search(ComponentDirContext.java:394)
 at 
 com.sun.jndi.toolkit.ctx.PartialCompositeDirContext.search(PartialCompositeDirContext.java:376)
 at 
 com.sun.jndi.toolkit.ctx.PartialCompositeDirContext.search(PartialCompositeDirContext.java:358)
 at 
 javax.naming.directory.InitialDirContext.search(InitialDirContext.java:267)
 at 
 org.apache.hadoop.security.LdapGroupsMapping.getGroups(LdapGroupsMapping.java:187)
 at 
 org.apache.hadoop.security.CompositeGroupsMapping.getGroups(CompositeGroupsMapping.java:97)
 at org.apache.hadoop.security.Groups.doGetGroups(Groups.java:103)
 at org.apache.hadoop.security.Groups.getGroups(Groups.java:70)
 at 
 org.apache.hadoop.security.UserGroupInformation.getGroupNames(UserGroupInformation.java:1035)
 at org.apache.hadoop.hbase.security.User.getGroupNames(User.java:90)
 at 
 org.apache.hadoop.hbase.security.access.TableAuthManager.authorize(TableAuthManager.java:355)
 at 
 org.apache.hadoop.hbase.security.access.AccessController.requirePermission(AccessController.java:379)
 at 
 org.apache.hadoop.hbase.security.access.AccessController.getUserPermissions(AccessController.java:1051)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion.exec(HRegion.java:4914)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.execCoprocessor(HRegionServer.java:3546)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 org.apache.hadoop.hbase.ipc.SecureRpcEngine$Server.call(SecureRpcEngine.java:372)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1399)
 Caused by: java.io.IOException: connection closed
 at com.sun.jndi.ldap.LdapClient.ensureOpen(LdapClient.java:1558)
 at com.sun.jndi.ldap.LdapClient.search(LdapClient.java:503)
 at com.sun.jndi.ldap.LdapCtx.doSearch(LdapCtx.java:1965)
 ... 28 more
 2012-12-07 02:20:59,739 WARN org.apache.hadoop.security.UserGroupInformation: 
 No groups available for user aduser2

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9125) LdapGroupsMapping threw CommunicationException after some idle time

2013-03-27 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HADOOP-9125:
---

   Resolution: Fixed
Fix Version/s: 2.0.5-beta
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I've just committed this to trunk and branch-2.

Thanks a lot for the contribution, Kai.

 LdapGroupsMapping threw CommunicationException after some idle time
 ---

 Key: HADOOP-9125
 URL: https://issues.apache.org/jira/browse/HADOOP-9125
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 0.23.3, 2.0.0-alpha
Reporter: Kai Zheng
Assignee: Kai Zheng
 Fix For: 2.0.5-beta

 Attachments: HADOOP-9125.patch, HADOOP-9125.patch

   Original Estimate: 24h
  Remaining Estimate: 24h

 LdapGroupsMapping threw the exception below after some idle time. To reproduce 
 it, make no calls to the group mapping provider during that idle period.
 2012-12-07 02:20:59,738 WARN org.apache.hadoop.security.LdapGroupsMapping: 
 Exception trying to get groups for user aduser2
 javax.naming.CommunicationException: connection closed [Root exception is 
 java.io.IOException: connection closed]; remaining name 
 'CN=Users,DC=EXAMPLE,DC=COM'
 at com.sun.jndi.ldap.LdapCtx.doSearch(LdapCtx.java:1983)
 at com.sun.jndi.ldap.LdapCtx.searchAux(LdapCtx.java:1827)
 at com.sun.jndi.ldap.LdapCtx.c_search(LdapCtx.java:1752)
 at com.sun.jndi.ldap.LdapCtx.c_search(LdapCtx.java:1769)
 at 
 com.sun.jndi.toolkit.ctx.ComponentDirContext.p_search(ComponentDirContext.java:394)
 at 
 com.sun.jndi.toolkit.ctx.PartialCompositeDirContext.search(PartialCompositeDirContext.java:376)
 at 
 com.sun.jndi.toolkit.ctx.PartialCompositeDirContext.search(PartialCompositeDirContext.java:358)
 at 
 javax.naming.directory.InitialDirContext.search(InitialDirContext.java:267)
 at 
 org.apache.hadoop.security.LdapGroupsMapping.getGroups(LdapGroupsMapping.java:187)
 at 
 org.apache.hadoop.security.CompositeGroupsMapping.getGroups(CompositeGroupsMapping.java:97)
 at org.apache.hadoop.security.Groups.doGetGroups(Groups.java:103)
 at org.apache.hadoop.security.Groups.getGroups(Groups.java:70)
 at 
 org.apache.hadoop.security.UserGroupInformation.getGroupNames(UserGroupInformation.java:1035)
 at org.apache.hadoop.hbase.security.User.getGroupNames(User.java:90)
 at 
 org.apache.hadoop.hbase.security.access.TableAuthManager.authorize(TableAuthManager.java:355)
 at 
 org.apache.hadoop.hbase.security.access.AccessController.requirePermission(AccessController.java:379)
 at 
 org.apache.hadoop.hbase.security.access.AccessController.getUserPermissions(AccessController.java:1051)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion.exec(HRegion.java:4914)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.execCoprocessor(HRegionServer.java:3546)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 org.apache.hadoop.hbase.ipc.SecureRpcEngine$Server.call(SecureRpcEngine.java:372)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1399)
 Caused by: java.io.IOException: connection closed
 at com.sun.jndi.ldap.LdapClient.ensureOpen(LdapClient.java:1558)
 at com.sun.jndi.ldap.LdapClient.search(LdapClient.java:503)
 at com.sun.jndi.ldap.LdapCtx.doSearch(LdapCtx.java:1965)
 ... 28 more
 2012-12-07 02:20:59,739 WARN org.apache.hadoop.security.UserGroupInformation: 
 No groups available for user aduser2

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9357) Fallback to default authority if not specified in FileContext

2013-03-27 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13615885#comment-13615885
 ] 

Eli Collins commented on HADOOP-9357:
-

+1 looks great

 Fallback to default authority if not specified in FileContext
 -

 Key: HADOOP-9357
 URL: https://issues.apache.org/jira/browse/HADOOP-9357
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Andrew Wang
Assignee: Andrew Wang
Priority: Minor
 Fix For: 3.0.0

 Attachments: hadoop-9357-1.patch, hadoop-9357-2.patch, 
 hadoop-9357-3.patch


 Currently, FileContext adheres rather strictly to RFC2396 when it comes to 
 parsing absolute URIs (URIs with a scheme). If a user asks for a URI like 
 hdfs:///tmp, FileContext will error while FileSystem will add the authority 
 of the default FS (e.g. turn it into hdfs://defaultNN:port/tmp). 
 This is technically correct, but FileSystem's behavior is nicer for users and 
 okay based on 5.2.3 in the RFC, so let's do it in FileContext too:
 {noformat}
 For backwards
 compatibility, an implementation may work around such references
 by removing the scheme if it matches that of the base URI and the
 scheme is known to always use the <hier_part> syntax.  The parser
 can then continue with the steps below for the remainder of the
 reference components.  Validating parsers should mark such a
 misformed relative reference as an error.
 {noformat}
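
 A minimal sketch of the fallback under discussion (not the attached patch); 
 FileSystem.getDefaultUri is the real Hadoop API, while the class and method 
 names here are illustrative:
{noformat}
import java.net.URI;
import java.net.URISyntaxException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

// Hypothetical helper: borrow the default FS authority when an absolute URI
// such as hdfs:///tmp shares the default scheme but omits the authority.
public class DefaultAuthorityFallback {
  public static URI addDefaultAuthority(URI path, Configuration conf)
      throws URISyntaxException {
    URI defaultUri = FileSystem.getDefaultUri(conf); // e.g. hdfs://defaultNN:port
    if (path.getScheme() != null && path.getAuthority() == null
        && path.getScheme().equals(defaultUri.getScheme())) {
      return new URI(path.getScheme(), defaultUri.getAuthority(),
          path.getPath(), path.getQuery(), path.getFragment());
    }
    return path; // already has an authority, or a different scheme
  }
}
{noformat}
 With fs.defaultFS set to hdfs://defaultNN:port, addDefaultAuthority applied to 
 hdfs:///tmp would yield hdfs://defaultNN:port/tmp.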

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9357) Fallback to default authority if not specified in FileContext

2013-03-27 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HADOOP-9357:


Fix Version/s: (was: 3.0.0)
   2.0.4-alpha
 Hadoop Flags: Reviewed

I've committed this and merged to branch-2. Thanks Andrew!

 Fallback to default authority if not specified in FileContext
 -

 Key: HADOOP-9357
 URL: https://issues.apache.org/jira/browse/HADOOP-9357
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Andrew Wang
Assignee: Andrew Wang
Priority: Minor
 Fix For: 2.0.4-alpha

 Attachments: hadoop-9357-1.patch, hadoop-9357-2.patch, 
 hadoop-9357-3.patch


 Currently, FileContext adheres rather strictly to RFC2396 when it comes to 
 parsing absolute URIs (URIs with a scheme). If a user asks for a URI like 
 hdfs:///tmp, FileContext will error while FileSystem will add the authority 
 of the default FS (e.g. turn it into hdfs://defaultNN:port/tmp). 
 This is technically correct, but FileSystem's behavior is nicer for users and 
 okay based on 5.2.3 in the RFC, so let's do it in FileContext too:
 {noformat}
 For backwards
 compatibility, an implementation may work around such references
 by removing the scheme if it matches that of the base URI and the
 scheme is known to always use the <hier_part> syntax.  The parser
 can then continue with the steps below for the remainder of the
 reference components.  Validating parsers should mark such a
 misformed relative reference as an error.
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9437) TestNativeIO#testRenameTo fails on Windows due to assumption that POSIX errno is embedded in NativeIOException

2013-03-27 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-9437:
--

Attachment: HADOOP-9437.2.patch

{quote}
Can you please check if the two are compatible?
{quote}

Ivan, thank you for asking about this.  I had thought that the two were 
compatible at least for the rename function (though not all CRT functions).  
Your question prompted me to double-check, and it turns out that I was 
mistaken.  They are not compatible.

Here is a new version of the patch that uses conditional compilation to pass 
the return value of {{GetLastError}} to {{throw_ioe}} when running on Windows.  
I retested this on Windows and Linux with the native build.  I confirmed that 
{{rename}} is setting the value of {{GetLastError}} when it fails.


 TestNativeIO#testRenameTo fails on Windows due to assumption that POSIX errno 
 is embedded in NativeIOException
 --

 Key: HADOOP-9437
 URL: https://issues.apache.org/jira/browse/HADOOP-9437
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HADOOP-9437.1.patch, HADOOP-9437.2.patch


 HDFS-4428 added a detailed error message for failures to rename files by 
 embedding the POSIX errno in the {{NativeIOException}}.  On Windows, the 
 mapping of errno is not performed, so the errno enum value will not be 
 present in the {{NativeIOException}}.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9439) JniBasedUnixGroupsMapping: fix some crash bugs

2013-03-27 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13615948#comment-13615948
 ] 

Colin Patrick McCabe commented on HADOOP-9439:
--

yeah, this will definitely fix the memory leak identified in HADOOP-9312.

 JniBasedUnixGroupsMapping: fix some crash bugs
 --

 Key: HADOOP-9439
 URL: https://issues.apache.org/jira/browse/HADOOP-9439
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.0.4-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HADOOP-9439.001.patch, HDFS-4640.002.patch


 JniBasedUnixGroupsMapping has some issues.
 * sometimes on error paths variables are freed prior to being initialized
 * re-allocate buffers less frequently (can reuse the same buffer for multiple 
 calls to getgrnam)
 * allow non-reentrant functions to be used, to work around client bugs
 * don't throw IOException from JNI functions if the JNI functions do not 
 declare this checked exception.
 * don't bail out if only one group name among all the ones associated with a 
 user can't be looked up.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9435) Native build hadoop-common-project fails on $JAVA_HOME/include/jni_md.h using ibm java

2013-03-27 Thread Tian Hong Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13615982#comment-13615982
 ] 

Tian Hong Wang commented on HADOOP-9435:


Thanks Colin for your comments. But why was there no Hadoop QA report when the 
patch was submitted?

 Native build hadoop-common-project fails on $JAVA_HOME/include/jni_md.h using 
 ibm java
 --

 Key: HADOOP-9435
 URL: https://issues.apache.org/jira/browse/HADOOP-9435
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Reporter: Tian Hong Wang
  Labels: patch
 Fix For: 2.0.3-alpha

 Attachments: HADOOP-9435.patch


 When building the native parts of hadoop-common-project with IBM Java using a 
 command like: 
 mvn package -Pnative
 the build fails with the following errors.
  [exec] CMake Error at JNIFlags.cmake:113 (MESSAGE):
  [exec]   Failed to find a viable JVM installation under JAVA_HOME.
  [exec] Call Stack (most recent call first):
  [exec]   CMakeLists.txt:24 (include)
  [exec] 
  [exec] 
  [exec] -- Configuring incomplete, errors occurred!
 The reason is that IBM Java provides $JAVA_HOME/include/jniport.h instead of 
 the $JAVA_HOME/include/jni_md.h shipped with Oracle Java.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9421) Add full length to SASL response to allow non-blocking readers

2013-03-27 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13616078#comment-13616078
 ] 

Devaraj Das commented on HADOOP-9421:
-

This makes sense to me.

 Add full length to SASL response to allow non-blocking readers
 --

 Key: HADOOP-9421
 URL: https://issues.apache.org/jira/browse/HADOOP-9421
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Sanjay Radia



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira