[jira] [Updated] (HADOOP-9716) Move the Rpc request call ID generation to client side InvocationHandler

2013-07-10 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-9716:


Issue Type: Improvement  (was: Bug)

> Move the Rpc request call ID generation to client side InvocationHandler
> 
>
> Key: HADOOP-9716
> URL: https://issues.apache.org/jira/browse/HADOOP-9716
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Suresh Srinivas
>
> Currently when RetryInvocationHandler is used to retry an RPC request, a new 
> RPC request call ID is generated. This jira proposes moving call ID 
> generation to InvocationHandler so that retried RPC requests retain the same 
> call ID. This is needed for RetryCache functionality proposed in HDFS-4942.



[jira] [Moved] (HADOOP-9716) Move the Rpc request call ID generation to client side InvocationHandler

2013-07-10 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas moved HDFS-4973 to HADOOP-9716:
---

Key: HADOOP-9716  (was: HDFS-4973)
Project: Hadoop Common  (was: Hadoop HDFS)

> Move the Rpc request call ID generation to client side InvocationHandler
> 
>
> Key: HADOOP-9716
> URL: https://issues.apache.org/jira/browse/HADOOP-9716
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Suresh Srinivas
>
> Currently when RetryInvocationHandler is used to retry an RPC request, a new 
> RPC request call ID is generated. This jira proposes moving call ID 
> generation to InvocationHandler so that retried RPC requests retain the same 
> call ID. This is needed for RetryCache functionality proposed in HDFS-4942.



[jira] [Commented] (HADOOP-9716) Move the Rpc request call ID generation to client side InvocationHandler

2013-07-10 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13704294#comment-13704294
 ] 

Suresh Srinivas commented on HADOOP-9716:
-

The current chain of calls for a method invocation is:
* The method is invoked on the proxy.
* The proxy passes it to the InvocationHandler.
* The InvocationHandler delegates it to the Invoker.
* The Invoker creates a new RPC request, which results in a new call with a new 
call ID.

In this jira, I propose making the following change (a rough sketch follows the 
list):
* The InvocationHandler generates a new call ID before the method.invoke() call.
* It stores the call ID in a ThreadLocal variable in the {{Client}} class, along 
with a second ThreadLocal in {{Client}} for the number of retry attempts.
* {{Client#call}} simply uses the pre-generated call ID.
* The InvocationHandler does not generate a new call ID for retry attempts.
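
As a rough illustration, here is a minimal sketch of the proposal (illustrative 
names, not the actual patch; {{nextCallId}}, {{setCallIdAndRetryCount}} and the 
retry loop are assumptions):

{code}
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: Client exposes ThreadLocals that the InvocationHandler fills in
// before method.invoke(); Client#call consumes them instead of generating a
// fresh ID per attempt.
class Client {
  private static final AtomicInteger ID_COUNTER = new AtomicInteger();
  static final ThreadLocal<Integer> CALL_ID = new ThreadLocal<Integer>();
  static final ThreadLocal<Integer> RETRY_COUNT = new ThreadLocal<Integer>();

  static int nextCallId() { return ID_COUNTER.getAndIncrement(); }

  static void setCallIdAndRetryCount(int callId, int retryCount) {
    CALL_ID.set(callId);
    RETRY_COUNT.set(retryCount);
  }
}

class RetryingHandler implements InvocationHandler {
  private final Object underlyingProxy;
  private final int maxRetries;

  RetryingHandler(Object underlyingProxy, int maxRetries) {
    this.underlyingProxy = underlyingProxy;
    this.maxRetries = maxRetries;
  }

  public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
    int callId = Client.nextCallId();                 // generated once, up front
    for (int attempt = 0; ; attempt++) {
      Client.setCallIdAndRetryCount(callId, attempt); // same ID on every retry
      try {
        return method.invoke(underlyingProxy, args);
      } catch (InvocationTargetException e) {
        if (attempt >= maxRetries) {
          throw e.getCause();
        }
        // else fall through and retry with the same call ID
      }
    }
  }
}
{code}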


> Move the Rpc request call ID generation to client side InvocationHandler
> 
>
> Key: HADOOP-9716
> URL: https://issues.apache.org/jira/browse/HADOOP-9716
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Suresh Srinivas
>
> Currently when RetryInvocationHandler is used to retry an RPC request, a new 
> RPC request call ID is generated. This jira proposes moving call ID 
> generation to InvocationHandler so that retried RPC requests retain the same 
> call ID. This is needed for RetryCache functionality proposed in HDFS-4942.



[jira] [Created] (HADOOP-9717) Add retry flag/retry attempt count to the RPC requests

2013-07-10 Thread Suresh Srinivas (JIRA)
Suresh Srinivas created HADOOP-9717:
---

 Summary: Add retry flag/retry attempt count to the RPC requests
 Key: HADOOP-9717
 URL: https://issues.apache.org/jira/browse/HADOOP-9717
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Suresh Srinivas


The server-side RetryCache lookup can be optimized if the Rpc request indicates 
that the request is being retried. This jira proposes adding an optional field 
to the Rpc request that indicates whether the request is a retry.



[jira] [Commented] (HADOOP-9717) Add retry flag/retry attempt count to the RPC requests

2013-07-10 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13704300#comment-13704300
 ] 

Suresh Srinivas commented on HADOOP-9717:
-

There are two choices for how the retry information can be conveyed from the 
client to the server:
# Add a boolean field "retry", set to false for the first attempt and to true 
for subsequent retries.
# Add an int (or byte?) field "retryCount", set to 0 for the first attempt and 
to the retry count for retried requests.

I prefer option 2; either choice enables the same server-side fast path (see 
the sketch below). Comments?
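
Whichever field we pick, the server-side optimization looks roughly the same; 
a hedged sketch (illustrative names, with a plain map standing in for the real 
RetryCache):

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: a first attempt (retryCount == 0) can never be a duplicate, so the
// server skips the RetryCache lookup entirely and only pays for it on retries.
class RetryCacheSketch {
  private final Map<Long, String> cache = new ConcurrentHashMap<Long, String>();

  String handle(long callId, int retryCount, String request) {
    if (retryCount == 0) {
      String response = process(request);
      cache.put(callId, response);    // still record it for potential retries
      return response;                // fast path: no lookup needed
    }
    String prior = cache.get(callId); // only retries pay for the lookup
    if (prior != null) {
      return prior;                   // replay the earlier response
    }
    String response = process(request);
    cache.put(callId, response);
    return response;
  }

  private String process(String request) { return "ok:" + request; }
}
{code}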

> Add retry flag/retry attempt count to the RPC requests
> --
>
> Key: HADOOP-9717
> URL: https://issues.apache.org/jira/browse/HADOOP-9717
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Suresh Srinivas
>
> RetryCache lookup on server side implementation can be optimized if Rpc 
> request indicates if the request is being retried. This jira proposes adding 
> an optional field to Rpc request that indicates if request is being retried.



[jira] [Commented] (HADOOP-9601) Support native CRC on byte arrays

2013-07-10 Thread Gopal V (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13704299#comment-13704299
 ] 

Gopal V commented on HADOOP-9601:
-

GET_ARRAYS() is a macro because it assigns to 4 local variables and needs six 
other local arguments. Making it a function would not make it any more readable 
or simpler to understand.

And RELEASE_ARRAYS() is a macro simply because the other one is.

I will add a check for t2 == t1 to the unit test, but the likelihood of hitting 
that case is rather low because we're checksumming 512 MB of data in the loop.

> Support native CRC on byte arrays
> -
>
> Key: HADOOP-9601
> URL: https://issues.apache.org/jira/browse/HADOOP-9601
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: performance, util
>Affects Versions: 3.0.0
>Reporter: Todd Lipcon
>Assignee: Gopal V
>  Labels: perfomance
> Attachments: HADOOP-9601-bench.patch, 
> HADOOP-9601-rebase+benchmark.patch, HADOOP-9601-trunk-rebase-2.patch, 
> HADOOP-9601-trunk-rebase.patch, HADOOP-9601-WIP-01.patch, 
> HADOOP-9601-WIP-02.patch
>
>
> When we first implemented the Native CRC code, we only did so for direct byte 
> buffers, because these correspond directly to native heap memory and thus 
> make it easy to access via JNI. We'd generally assumed that accessing byte[] 
> arrays from JNI was not efficient enough, but now that I know more about JNI 
> I don't think that's true -- we just need to make sure that the critical 
> sections where we lock the buffers are short.



[jira] [Commented] (HADOOP-8873) Port HADOOP-8175 (Add mkdir -p flag) to branch-1

2013-07-10 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13704333#comment-13704333
 ] 

Suresh Srinivas commented on HADOOP-8873:
-

Can you please reply to the comment I posted? HADOOP-8551 was an incompatible 
change. If this port makes an incompatible change, I do not think we can get 
this into release 1.x.

> Port HADOOP-8175 (Add mkdir -p flag) to branch-1
> 
>
> Key: HADOOP-8873
> URL: https://issues.apache.org/jira/browse/HADOOP-8873
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 1.2.0
>Reporter: Eli Collins
>Assignee: Akira AJISAKA
>  Labels: newbie
> Attachments: HADOOP-8873-1.patch, HADOOP-8873-2.patch
>
>
> Per HADOOP-8551 let's port the mkdir -p option to branch-1 for a 1.x release 
> to help users transition to the new shell behavior. In Hadoop 2.x mkdir 
> currently requires the -p option to create parent directories but a program 
> that specifies it won't work on 1.x since it doesn't support this option.



[jira] [Commented] (HADOOP-9717) Add retry flag/retry attempt count to the RPC requests

2013-07-10 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13704336#comment-13704336
 ] 

Steve Loughran commented on HADOOP-9717:


A byte is fairly compact and would let you track some multi-retry problems if 
you are packet sampling; the client just needs to stop incrementing once it 
reaches 0xff (see the sketch below).
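
(A trivial sketch of that saturation, assuming the count travels as an 
unsigned byte:)

{code}
// Saturate instead of wrapping: once the one-byte counter hits 0xff it stays
// there, so samplers still see "many retries" rather than a wrapped-around 0.
static int nextRetryCount(int retryCount) {
  return (retryCount >= 0xff) ? 0xff : retryCount + 1;
}
{code}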

> Add retry flag/retry attempt count to the RPC requests
> --
>
> Key: HADOOP-9717
> URL: https://issues.apache.org/jira/browse/HADOOP-9717
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Suresh Srinivas
>
> RetryCache lookup on server side implementation can be optimized if Rpc 
> request indicates if the request is being retried. This jira proposes adding 
> an optional field to Rpc request that indicates if request is being retried.



[jira] [Updated] (HADOOP-9716) Move the Rpc request call ID generation to client side InvocationHandler

2013-07-10 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HADOOP-9716:
---

Component/s: ipc
   Assignee: Tsz Wo (Nicholas), SZE

> Move the Rpc request call ID generation to client side InvocationHandler
> 
>
> Key: HADOOP-9716
> URL: https://issues.apache.org/jira/browse/HADOOP-9716
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Suresh Srinivas
>Assignee: Tsz Wo (Nicholas), SZE
>
> Currently when RetryInvocationHandler is used to retry an RPC request, a new 
> RPC request call ID is generated. This jira proposes moving call ID 
> generation to InvocationHandler so that retried RPC requests retain the same 
> call ID. This is needed for RetryCache functionality proposed in HDFS-4942.



[jira] [Updated] (HADOOP-9717) Add retry flag/retry attempt count to the RPC requests

2013-07-10 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HADOOP-9717:
---

Component/s: ipc
   Assignee: Tsz Wo (Nicholas), SZE

> Add retry flag/retry attempt count to the RPC requests
> --
>
> Key: HADOOP-9717
> URL: https://issues.apache.org/jira/browse/HADOOP-9717
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Suresh Srinivas
>Assignee: Tsz Wo (Nicholas), SZE
>
> RetryCache lookup on server side implementation can be optimized if Rpc 
> request indicates if the request is being retried. This jira proposes adding 
> an optional field to Rpc request that indicates if request is being retried.



[jira] [Commented] (HADOOP-9392) Token based authentication and Single Sign On

2013-07-10 Thread Tianyou Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13704360#comment-13704360
 ] 

Tianyou Li commented on HADOOP-9392:


Hi Brian,

Thanks for reviewing and providing feedback on the design. You have asked some 
good questions, so let me try to add some more context on the design choices 
and why we made them. Hopefully this additional context will add some clarity. 
Please feel free to ask if you still have questions or concerns.

> 1. The new diagram (p. 3) that describes client/TAS/AS/IdP/Hadoop Services 
> interaction shows a client providing credentials to TAS, which then provides 
> the credentials to the IdP. From a security perspective, this seems like a 
> bad idea. It defeats the purpose of having an IdP in the first place. Is this 
> an oversight or by design?
 
From the client's point of view, the TAS should be trusted by the client for 
authentication; whether client credentials can be passed to TAS directly 
depends on the IdP's capabilities, the deployment decisions, etc. If the IdP 
can generate a token and is federated with TAS, then that token can be used to 
authenticate with TAS and generate an identity token in the Hadoop cluster. If 
the IdP does not have the capability to generate a trusted token (e.g. LDAP), 
then there are several alternative solutions depending on the deployment 
scenario.

In the first scenario, TAS and the IdP are deployed in the same organization on 
the same network, and TAS can access the IdP directly; credentials are passed 
to TAS securely (over SSL) and TAS then passes them to an IdP such as LDAP. In 
the second scenario, TAS and the IdP are deployed on different networks and TAS 
cannot contact the IdP directly - for example, the LDAP server resides inside 
the enterprise, TAS is deployed in the cloud, and the client is trying to 
access the cluster from the enterprise. Here, an agent trusted by the client 
can be deployed to collect client credentials, pass them to LDAP (the IdP), and 
present a token to the external TAS to complete the authentication process; 
this agent can be another TAS as well. The third scenario is similar to the 
second, except that the client is trying to access the cluster from a public 
network (for example, a cloud environment) but needs to use the enterprise 
LDAP as the IdP. In this scenario, an agent (which can be a TAS) needs to be 
deployed as a gateway on the enterprise side to collect credentials.

In any of the above scenarios, when the IdP cannot generate a token as a result 
of authentication, TAS can act as the agent trusted by the client to collect 
credentials for first-mile authentication. These considerations are why we drew 
the flow as shown on page 3.

> 2. I'm not sure I understand why AS is necessary. It seems to complicate the 
> design by adding an unnecessary authorization check - authorization 
> can/should happen at individual Hadoop services based on token attributes. I 
> think you have mentioned before that authorization (with AS in place) would 
> happen at both places (some level of authz at AS and finer grained authz at 
> services). Can you elaborate on what value that adds over doing authz at 
> services only? And, can you provide an example of what authz checks would 
> happen at each place? (Say I access NameNode. What authz checks are done at 
> AS and what is done at the service?)
 
I would like to agree with you that authorization can be pushed to the service 
side, but having centralized authorization has some advantages. For example, 
any authZ policy change can be enforced immediately instead of waiting for the 
policy to sync to each service. It also provides a centralized place for 
auditing client access. The centralized authZ acts much like service-level 
authZ, except that it is centralized for the reasons just mentioned. (In the 
scenario you mentioned: to access the HDFS service, you need an access token 
granted according to the defined authZ policy; once you have the access token 
you can reach the HDFS service, but that does not mean you can access any file 
in HDFS - file/directory-level access control is done by HDFS itself.)
 
> 3. I believe this has been mentioned before, but the scope of this document 
> makes it very difficult to move forward with contributing code. It would be 
> very helpful to understand how you envision breaking this down into work 
> items that the community can pick up (I think this is what the DISCUSS thread 
> on common-dev was attempting to do).

This one I am trying to understand a little better. Please help me understand 
what you mean by “… scope of this document makes it very difficult to move 
forward with contributing code.” If we were to break the jira down into a 
number of sub-tasks based on the document, would that be helpful?

Regards.


> Token based authentication and Single Sign On
> --

[jira] [Commented] (HADOOP-9438) LocalFileContext does not throw an exception on mkdir for already existing directory

2013-07-10 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13704380#comment-13704380
 ] 

Steve Loughran commented on HADOOP-9438:


Looking at where the code uses this, there are a few places in YARN that tend 
to go

{code}
if (!fc.exists(path)) {
  try {
    fc.mkdirs(path);
    // some stuff to set permissions up
    ...
  } catch (FileAlreadyExistsException fae) {
    // no-op
  }
}
{code}

This catch actually ignores a potentially serious problem - parent path 
elements not being directories.

# All FileSystem implementations return true if, at the end of the operation, 
the directory exists. This isn't in the javadocs of {{FileSystem.mkdirs()}} 
yet, but I plan to add it. In fact, if you look at 
{{FSNameSystem.mkdirsInternal()}}, the fact that mkdirs() is always expected to 
succeed is called out:

{code}
  // all the users of mkdirs() are used to expect 'true' even if
  // a new directory is not created.
{code}

If you look at the implementations of {{FileSystem.mkdirs()}}, they will throw 
either {{FileAlreadyExistsException}} or {{ParentNotDirectoryException}}. 
Swallowing a {{FileAlreadyExistsException}} can hide a serious problem - which 
is why MAPREDUCE-5264 is needed.

Regarding this patch, the interface definition needs to retain the fact that an 
FAE can be thrown, but the reason for throwing it is the same as for 
{{ParentNotDirectoryException}}.
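
Something like the following calling pattern (a hedged sketch, not part of the 
patch) would keep the benign case quiet without hiding the dangerous ones:

{code}
// Sketch: only swallow FileAlreadyExistsException when the target itself is
// already a directory. If the path exists as a file we rethrow; if a parent
// element is a file, getFileStatus() throws FileNotFoundException and the
// problem still surfaces.
try {
  fc.mkdir(path, perms, true);
} catch (FileAlreadyExistsException e) {
  if (!fc.getFileStatus(path).isDirectory()) {
    throw e;   // path exists but is a file -- not benign
  }
  // directory already there: nothing to do
}
{code}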



> LocalFileContext does not throw an exception on mkdir for already existing 
> directory
> 
>
> Key: HADOOP-9438
> URL: https://issues.apache.org/jira/browse/HADOOP-9438
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.0.3-alpha
>Reporter: Robert Joseph Evans
>Priority: Critical
> Attachments: HADOOP-9438.20130501.1.patch, 
> HADOOP-9438.20130521.1.patch, HADOOP-9438.patch, HADOOP-9438.patch
>
>
> according to 
> http://hadoop.apache.org/docs/current/api/org/apache/hadoop/fs/FileContext.html#mkdir%28org.apache.hadoop.fs.Path,%20org.apache.hadoop.fs.permission.FsPermission,%20boolean%29
> should throw a FileAlreadyExistsException if the directory already exists.
> I tested this and 
> {code}
> FileContext lfc = FileContext.getLocalFSFileContext(new Configuration());
> Path p = new Path("/tmp/bobby.12345");
> FsPermission cachePerms = new FsPermission((short) 0755);
> lfc.mkdir(p, cachePerms, false);
> lfc.mkdir(p, cachePerms, false);
> {code}
> never throws an exception.



[jira] [Commented] (HADOOP-9446) Support Kerberos HTTP SPNEGO authentication for non-SUN JDK

2013-07-10 Thread Yu Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13704389#comment-13704389
 ] 

Yu Li commented on HADOOP-9446:
---

Hi Yu Gao,

For the branch-2 patch, the definition of the "ibmJava" variable is missing, 
which will cause a compile error. Could you resolve this and update the patch? 
Thanks.

> Support Kerberos HTTP SPNEGO authentication for non-SUN JDK
> ---
>
> Key: HADOOP-9446
> URL: https://issues.apache.org/jira/browse/HADOOP-9446
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 1.1.1, 2.0.2-alpha
>Reporter: Yu Gao
>Assignee: Yu Gao
> Attachments: HADOOP-9446-branch-1.patch, HADOOP-9446-branch-2.patch, 
> HADOOP-9446.patch, TestKerberosHttpSPNEGO.java
>
>
> Class KerberosAuthenticator and KerberosAuthenticationHandler currently only 
> support running with SUN JDK when Kerberos is enabled. In order to support  
> alternative JDKs like IBM JDK which has different options supported by 
> Krb5LoginModule and different login module classes, the HTTP Kerberos 
> authentication classes need to be changed.
> In addition, NT_GSS_KRB5_PRINCIPAL, which is used in KerberosAuthenticator to 
> get the corresponding oid instance, is a field defined in SUN JDK, but not in 
> IBM JDK.
> This JIRA is to fix the existing problems and add support for Kerberos HTTP 
> SPNEGO authentication with non-SUN JDK.



[jira] [Commented] (HADOOP-8440) HarFileSystem.decodeHarURI fails for URIs whose host contains numbers

2013-07-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13704420#comment-13704420
 ] 

Hudson commented on HADOOP-8440:


Integrated in Hadoop-Yarn-trunk #266 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/266/])
HADOOP-8440. HarFileSystem.decodeHarURI fails for URIs whose host contains 
numbers. Contributed by Ivan Mitic. (Revision 1501424)

 Result = SUCCESS
cnauroth : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1501424
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/HarFileSystem.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestHarFileSystem.java


> HarFileSystem.decodeHarURI fails for URIs whose host contains numbers
> -
>
> Key: HADOOP-8440
> URL: https://issues.apache.org/jira/browse/HADOOP-8440
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 1.0.0, 3.0.0, 2.1.0-beta
>Reporter: Ivan Mitic
>Assignee: Ivan Mitic
>Priority: Minor
> Fix For: 3.0.0, 1-win, 2.1.0-beta
>
> Attachments: HADOOP-8440-2-branch-1-win.patch, 
> HADOOP-8440-branch-1-win.2.patch, HADOOP-8440-branch-1-win.patch, 
> HADOOP-8440-branch-1-win.patch, HADOOP-8440-trunk.patch, 
> HADOOP-8440-trunk.patch
>
>
> For example, HarFileSystem.decodeHarURI will fail for the following URI:
> har://hdfs-127.0.0.1:51040/user



[jira] [Commented] (HADOOP-9691) RPC clients can generate call ID using AtomicInteger instead of synchronizing on the Client instance.

2013-07-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13704418#comment-13704418
 ] 

Hudson commented on HADOOP-9691:


Integrated in Hadoop-Yarn-trunk #266 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/266/])
HADOOP-9691. RPC clients can generate call ID using AtomicInteger instead 
of synchronizing on the Client instance. Contributed by Chris Nauroth. 
(Revision 1501615)

 Result = SUCCESS
cnauroth : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1501615
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RpcConstants.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestIPC.java


> RPC clients can generate call ID using AtomicInteger instead of synchronizing 
> on the Client instance.
> -
>
> Key: HADOOP-9691
> URL: https://issues.apache.org/jira/browse/HADOOP-9691
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 3.0.0, 2.1.0-beta
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HADOOP-9691.1.patch, HADOOP-9691.2.patch, 
> HADOOP-9691.3.patch
>
>
> As noted in discussion on HADOOP-9688, we can optimize generation of call ID 
> in the RPC client code.  Currently, it synchronizes on the {{Client}} 
> instance to coordinate access to a shared {{int}}.  We can switch this to 
> {{AtomicInteger}} to avoid lock contention.
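
In code terms, the change described above amounts to something like this (a 
sketch, not the exact patch; the sign-bit mask keeping IDs non-negative after 
wrap-around is an assumption here):

{code}
import java.util.concurrent.atomic.AtomicInteger;

class CallIdGenerator {
  private final AtomicInteger counter = new AtomicInteger();

  // Lock-free replacement for synchronized access to a shared int.
  int nextCallId() {
    return counter.getAndIncrement() & 0x7fffffff;
  }
}
{code}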



[jira] [Commented] (HADOOP-9707) Fix register lists for crc32c inline assembly

2013-07-10 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13704523#comment-13704523
 ] 

Kihwal Lee commented on HADOOP-9707:


Sorry, I thought I posted a +1 along with my findings yesterday.

+1 for the patch.

I also looked at the documentation Brian quoted. I tried many things to make 
compilers generate problematic RTL, but was unsuccessful until loop unrolling 
was enabled; then I could clearly see the defect.

Thanks for finding and providing the fix.


> Fix register lists for crc32c inline assembly
> -
>
> Key: HADOOP-9707
> URL: https://issues.apache.org/jira/browse/HADOOP-9707
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 3.0.0, 2.1.0-beta
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Minor
> Attachments: hadoop-9707.txt
>
>
> The inline assembly used for the crc32 instructions has an incorrect clobber 
> list: the computed CRC values are "in-out" variables and thus need to use the 
> "matching constraint" syntax in the clobber list.
> This doesn't seem to cause a problem now in Hadoop, but may break in a 
> different compiler version which allocates registers differently, or may 
> break when the same code is used in another context.



[jira] [Commented] (HADOOP-8440) HarFileSystem.decodeHarURI fails for URIs whose host contains numbers

2013-07-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13704529#comment-13704529
 ] 

Hudson commented on HADOOP-8440:


Integrated in Hadoop-Hdfs-trunk #1456 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1456/])
HADOOP-8440. HarFileSystem.decodeHarURI fails for URIs whose host contains 
numbers. Contributed by Ivan Mitic. (Revision 1501424)

 Result = FAILURE
cnauroth : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1501424
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/HarFileSystem.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestHarFileSystem.java


> HarFileSystem.decodeHarURI fails for URIs whose host contains numbers
> -
>
> Key: HADOOP-8440
> URL: https://issues.apache.org/jira/browse/HADOOP-8440
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 1.0.0, 3.0.0, 2.1.0-beta
>Reporter: Ivan Mitic
>Assignee: Ivan Mitic
>Priority: Minor
> Fix For: 3.0.0, 1-win, 2.1.0-beta
>
> Attachments: HADOOP-8440-2-branch-1-win.patch, 
> HADOOP-8440-branch-1-win.2.patch, HADOOP-8440-branch-1-win.patch, 
> HADOOP-8440-branch-1-win.patch, HADOOP-8440-trunk.patch, 
> HADOOP-8440-trunk.patch
>
>
> For example, HarFileSystem.decodeHarURI will fail for the following URI:
> har://hdfs-127.0.0.1:51040/user



[jira] [Commented] (HADOOP-9691) RPC clients can generate call ID using AtomicInteger instead of synchronizing on the Client instance.

2013-07-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13704527#comment-13704527
 ] 

Hudson commented on HADOOP-9691:


Integrated in Hadoop-Hdfs-trunk #1456 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1456/])
HADOOP-9691. RPC clients can generate call ID using AtomicInteger instead 
of synchronizing on the Client instance. Contributed by Chris Nauroth. 
(Revision 1501615)

 Result = FAILURE
cnauroth : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1501615
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RpcConstants.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestIPC.java


> RPC clients can generate call ID using AtomicInteger instead of synchronizing 
> on the Client instance.
> -
>
> Key: HADOOP-9691
> URL: https://issues.apache.org/jira/browse/HADOOP-9691
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 3.0.0, 2.1.0-beta
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HADOOP-9691.1.patch, HADOOP-9691.2.patch, 
> HADOOP-9691.3.patch
>
>
> As noted in discussion on HADOOP-9688, we can optimize generation of call ID 
> in the RPC client code.  Currently, it synchronizes on the {{Client}} 
> instance to coordinate access to a shared {{int}}.  We can switch this to 
> {{AtomicInteger}} to avoid lock contention.



[jira] [Commented] (HADOOP-8440) HarFileSystem.decodeHarURI fails for URIs whose host contains numbers

2013-07-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13704542#comment-13704542
 ] 

Hudson commented on HADOOP-8440:


Integrated in Hadoop-Mapreduce-trunk #1483 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1483/])
HADOOP-8440. HarFileSystem.decodeHarURI fails for URIs whose host contains 
numbers. Contributed by Ivan Mitic. (Revision 1501424)

 Result = FAILURE
cnauroth : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1501424
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/HarFileSystem.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestHarFileSystem.java


> HarFileSystem.decodeHarURI fails for URIs whose host contains numbers
> -
>
> Key: HADOOP-8440
> URL: https://issues.apache.org/jira/browse/HADOOP-8440
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 1.0.0, 3.0.0, 2.1.0-beta
>Reporter: Ivan Mitic
>Assignee: Ivan Mitic
>Priority: Minor
> Fix For: 3.0.0, 1-win, 2.1.0-beta
>
> Attachments: HADOOP-8440-2-branch-1-win.patch, 
> HADOOP-8440-branch-1-win.2.patch, HADOOP-8440-branch-1-win.patch, 
> HADOOP-8440-branch-1-win.patch, HADOOP-8440-trunk.patch, 
> HADOOP-8440-trunk.patch
>
>
> For example, HarFileSystem.decodeHarURI will fail for the following URI:
> har://hdfs-127.0.0.1:51040/user



[jira] [Commented] (HADOOP-9691) RPC clients can generate call ID using AtomicInteger instead of synchronizing on the Client instance.

2013-07-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13704540#comment-13704540
 ] 

Hudson commented on HADOOP-9691:


Integrated in Hadoop-Mapreduce-trunk #1483 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1483/])
HADOOP-9691. RPC clients can generate call ID using AtomicInteger instead 
of synchronizing on the Client instance. Contributed by Chris Nauroth. 
(Revision 1501615)

 Result = FAILURE
cnauroth : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1501615
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RpcConstants.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestIPC.java


> RPC clients can generate call ID using AtomicInteger instead of synchronizing 
> on the Client instance.
> -
>
> Key: HADOOP-9691
> URL: https://issues.apache.org/jira/browse/HADOOP-9691
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 3.0.0, 2.1.0-beta
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HADOOP-9691.1.patch, HADOOP-9691.2.patch, 
> HADOOP-9691.3.patch
>
>
> As noted in discussion on HADOOP-9688, we can optimize generation of call ID 
> in the RPC client code.  Currently, it synchronizes on the {{Client}} 
> instance to coordinate access to a shared {{int}}.  We can switch this to 
> {{AtomicInteger}} to avoid lock contention.



[jira] [Updated] (HADOOP-9361) Strictly define the expected behavior of filesystem APIs and write tests to verify compliance

2013-07-10 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-9361:
---

Attachment: HADOOP-9361-004.patch

> Strictly define the expected behavior of filesystem APIs and write tests to 
> verify compliance
> -
>
> Key: HADOOP-9361
> URL: https://issues.apache.org/jira/browse/HADOOP-9361
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, test
>Affects Versions: 3.0.0, 2.1.0-beta
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-9361-001.patch, HADOOP-9361-002.patch, 
> HADOOP-9361-003.patch, HADOOP-9361-004.patch
>
>
> {{FileSystem}} and {{FileContract}} aren't tested rigorously enough -while 
> HDFS gets tested downstream, other filesystems, such as blobstore bindings, 
> don't.
> The only tests that are common are those of {{FileSystemContractTestBase}}, 
> which HADOOP-9258 shows is incomplete.
> I propose 
> # writing more tests which clarify expected behavior
> # testing operations in the interface being in their own JUnit4 test classes, 
> instead of one big test suite. 
> # Having each FS declare via a properties file what behaviors they offer, 
> such as atomic-rename, atomic-delete, umask, immediate-consistency -test 
> methods can downgrade to skipped test cases if a feature is missing.



[jira] [Commented] (HADOOP-9361) Strictly define the expected behavior of filesystem APIs and write tests to verify compliance

2013-07-10 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13704636#comment-13704636
 ] 

Steve Loughran commented on HADOOP-9361:


The latest patch now has tests for: create, open, delete, mkdir and seek. I'm 
ignoring the rename tests, as I need to fully understand what HADOOP-6240 has 
defined first.

h3. seek
# I've been through the code and, wherever a negative seek was either ignored 
or raised a plain {{IOException}}, changed it to raise an {{EOFException}} 
(see the sketch after this list). This included changes to 
{{ChecksumFileSystem}}, {{RawLocalFileSystem}}, {{BufferedFSInputStream}} 
(which also now handles a null inner stream without NPEing) and 
{{FSInputChecker}}.
# Pulled in the test from HADOOP-9307 to do many random seeks and reads; the 
number of seeks is configurable, so that remote blobstore tests don't take 
forever unless you want them to (or are running them in-cluster).
# Some filesystems let you seek on a closed stream. I've fixed the NPE in 
{{BufferedFSInputStream}}; I'm not sure it is worth the effort of fixing this 
everywhere.
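
The precondition these fixes converge on is roughly (a sketch; the {{closed}} 
flag is an assumed field of the stream):

{code}
// Sketch of the tightened seek contract: a negative offset raises
// EOFException rather than being ignored or wrapped in a plain IOException,
// and a closed stream fails fast instead of NPEing.
public void seek(long pos) throws IOException {
  if (pos < 0) {
    throw new EOFException("Cannot seek to a negative offset: " + pos);
  }
  if (closed) {
    throw new IOException("Stream is closed");
  }
  // ... move the underlying stream position ...
}
{code}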

h3. NativeS3 issues/changes
* {{Jets3tNativeFileSystemStore}} converts the relevant S3 error code 
{{"InvalidRange"}} into an EOFException.
* Amazon S3 rejects a seek(0) on a zero-byte file; not fixed yet, as you need 
to know the file length to do it up front. Maybe an EOFException on a seek 
could be downgraded to a no-op if the seek offset is 0.
* It now throws a {{FileAlreadyExistsException}} if trying to create a file 
over an existing one with {{!overwrite}}.
* I'm deliberately skipping the test that expects creating a file over a dir 
to fail even when overwrite is true, because blobstores use 0-byte files as 
pretend directories.
* It's failing a test which overwrites a directory that has children. This 
could be picked up (look for children when overwriting a 0-byte file).
* It fails a test asserting that a newly created file exists while the write 
is still in progress; as blobstores only write the data when the stream is 
closed, it doesn't. This is potentially a race condition - we could create a 
marker file here and overwrite it on the close.

h3. FTP
I'll cover that in HADOOP-9712, as it's mostly bugs in a niche FS.

h3. LocalFS

* It throws {{FileNotFoundException}} when attempting to create a file where 
the destination (or a parent) is a directory. This happens inside the JDK and 
has to be a WONTFIX, unless it is caught and wrapped.
{code}
testOverwriteNonEmptyDirectory(org.apache.hadoop.fs.contract.localfs.TestLocalCreateContract)  Time elapsed: 38 sec  <<< ERROR!
java.io.FileNotFoundException: /Users/stevel/Projects/hadoop-trunk/hadoop-common-project/hadoop-common/target/test/data/testOverwriteNonEmptyDirectory (File exists)
at java.io.FileOutputStream.open(Native Method)
at java.io.FileOutputStream.<init>(FileOutputStream.java:194)
at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:227)
at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:223)
at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:286)
at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:273)
at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSOutputSummer.<init>(ChecksumFileSystem.java:384)
at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:443)
at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:424)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:888)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:869)
at org.apache.hadoop.fs.contract.ContractTestUtils.writeDataset(ContractTestUtils.java:130)
at org.apache.hadoop.fs.contract.AbstractCreateContractTest.testOverwriteNonEmptyDirectory(AbstractCreateContractTest.java:115)
{code}

# If you call {{mkdir(path-to-a-file)}} you get a false return value, but no 
exception is thrown. This is inconsistent with HDFS.
{code}
testNoMkdirOverFile(org.apache.hadoop.fs.contract.localfs.TestLocalDirectoryContract)  Time elapsed: 46 sec  <<< FAILURE!
java.lang.AssertionError: mkdirs succeeded over a file: ls file:/Users/stevel/Projects/hadoop-trunk/hadoop-common-project/hadoop-common/target/test/data/testNoMkdirOverFile[00]
RawLocalFileStatus{path=file:/Users/stevel/Projects/hadoop-trunk/hadoop-common-project/hadoop-common/target/test/data/testNoMkdirOverFile; isDirectory=false; length=1024; replication=1; blocksize=33554432; modification_time=1373457007000; access_time=0; owner=; group=; permission=rw-rw-rw-; isSymlink=false}

at org.junit.Assert.fail(Assert.java:93)
at org.apache.hadoop.fs.contract.AbstractDirectoryContractTest.testNoMkdirOverFile(AbstractDirectoryContractTest.java:68)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.Nati
{code}

[jira] [Updated] (HADOOP-9361) Strictly define the expected behavior of filesystem APIs and write tests to verify compliance

2013-07-10 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-9361:
---

Attachment: (was: HADOOP-9361-004.patch)

> Strictly define the expected behavior of filesystem APIs and write tests to 
> verify compliance
> -
>
> Key: HADOOP-9361
> URL: https://issues.apache.org/jira/browse/HADOOP-9361
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, test
>Affects Versions: 3.0.0, 2.1.0-beta
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-9361-001.patch, HADOOP-9361-002.patch, 
> HADOOP-9361-003.patch
>
>
> {{FileSystem}} and {{FileContract}} aren't tested rigorously enough -while 
> HDFS gets tested downstream, other filesystems, such as blobstore bindings, 
> don't.
> The only tests that are common are those of {{FileSystemContractTestBase}}, 
> which HADOOP-9258 shows is incomplete.
> I propose 
> # writing more tests which clarify expected behavior
> # testing operations in the interface being in their own JUnit4 test classes, 
> instead of one big test suite. 
> # Having each FS declare via a properties file what behaviors they offer, 
> such as atomic-rename, atomic-delete, umask, immediate-consistency -test 
> methods can downgrade to skipped test cases if a feature is missing.



[jira] [Updated] (HADOOP-9361) Strictly define the expected behavior of filesystem APIs and write tests to verify compliance

2013-07-10 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-9361:
---

Attachment: HADOOP-9361-004.patch

> Strictly define the expected behavior of filesystem APIs and write tests to 
> verify compliance
> -
>
> Key: HADOOP-9361
> URL: https://issues.apache.org/jira/browse/HADOOP-9361
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, test
>Affects Versions: 3.0.0, 2.1.0-beta
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-9361-001.patch, HADOOP-9361-002.patch, 
> HADOOP-9361-003.patch, HADOOP-9361-004.patch
>
>
> {{FileSystem}} and {{FileContract}} aren't tested rigorously enough -while 
> HDFS gets tested downstream, other filesystems, such as blobstore bindings, 
> don't.
> The only tests that are common are those of {{FileSystemContractTestBase}}, 
> which HADOOP-9258 shows is incomplete.
> I propose 
> # writing more tests which clarify expected behavior
> # testing operations in the interface being in their own JUnit4 test classes, 
> instead of one big test suite. 
> # Having each FS declare via a properties file what behaviors they offer, 
> such as atomic-rename, atomic-delete, umask, immediate-consistency -test 
> methods can downgrade to skipped test cases if a feature is missing.



[jira] [Updated] (HADOOP-9361) Strictly define the expected behavior of filesystem APIs and write tests to verify compliance

2013-07-10 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-9361:
---

Status: Patch Available  (was: Open)

Patch containing the tests. The ftp and s3n tests won't run unless test 
filesystems are provided; only the local and HDFS tests will, which will show 
up some of the ambiguities.

> Strictly define the expected behavior of filesystem APIs and write tests to 
> verify compliance
> -
>
> Key: HADOOP-9361
> URL: https://issues.apache.org/jira/browse/HADOOP-9361
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, test
>Affects Versions: 3.0.0, 2.1.0-beta
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-9361-001.patch, HADOOP-9361-002.patch, 
> HADOOP-9361-003.patch, HADOOP-9361-004.patch
>
>
> {{FileSystem}} and {{FileContract}} aren't tested rigorously enough -while 
> HDFS gets tested downstream, other filesystems, such as blobstore bindings, 
> don't.
> The only tests that are common are those of {{FileSystemContractTestBase}}, 
> which HADOOP-9258 shows is incomplete.
> I propose 
> # writing more tests which clarify expected behavior
> # testing operations in the interface being in their own JUnit4 test classes, 
> instead of one big test suite. 
> # Having each FS declare via a properties file what behaviors they offer, 
> such as atomic-rename, atomic-delete, umask, immediate-consistency -test 
> methods can downgrade to skipped test cases if a feature is missing.



[jira] [Commented] (HADOOP-9712) Write contract tests for FTP filesystem, fix places where it breaks

2013-07-10 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13704641#comment-13704641
 ] 

Steve Loughran commented on HADOOP-9712:


The parent -004 patch contains tests for the FTP client; this JIRA will look at 
the problems thrown up.


h4. Changes in the latest patch

* Connection refusals are wrapped via NetUtils.
* If you can't log in, the username is included in the exception.
* Not found => {{FileNotFoundException}}
* File found in a {{mkdir()}} => {{ParentNotDirectoryException}}
* {{FTPFileSystem.exists()}} used to downgrade IOExceptions to 
{{FTPException}}, which extends {{RuntimeException}}. This is potentially 
dangerous, as it could stop code that expects failures to be represented as 
IOException from catching them. It now just rethrows, so that problems don't 
get hidden.

h4. Bugs

* Rename doesn't appear to work, even within the same dir (it explicitly 
doesn't handle renames across dirs). Maybe the whole operation should be 
marked as unsupported.

* It throws FileNotFoundException when trying to delete a path that doesn't 
exist:
{code}
Running org.apache.hadoop.fs.contract.ftp.TestFTPDeleteContract
Tests run: 6, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 4.793 sec <<< FAILURE!
testDeleteNonexistentFileRecursive(org.apache.hadoop.fs.contract.ftp.TestFTPDeleteContract)  Time elapsed: 430 sec  <<< ERROR!
java.io.FileNotFoundException: File ftp:/linuxvm/home/stevel/test/testDeleteEmptyDirRecursive does not exist.
at org.apache.hadoop.fs.ftp.FTPFileSystem.getFileStatus(FTPFileSystem.java:434)
at org.apache.hadoop.fs.ftp.FTPFileSystem.delete(FTPFileSystem.java:317)
at org.apache.hadoop.fs.ftp.FTPFileSystem.delete(FTPFileSystem.java:294)
at org.apache.hadoop.fs.contract.AbstractDeleteContractTest.testDeleteNonexistentFileRecursive(AbstractDeleteContractTest.java:50)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39
{code}
  
This is easy to fix; I just wanted to note its existence.
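
(The obvious fix, sketched against the usual {{FileSystem#delete}} contract; 
{{doDelete}} stands in for the existing delete logic:)

{code}
// Sketch: delete() on a nonexistent path should report false, not throw.
public boolean delete(Path file, boolean recursive) throws IOException {
  try {
    getFileStatus(file);               // existence probe
  } catch (FileNotFoundException e) {
    return false;                      // nothing there -- nothing to delete
  }
  return doDelete(file, recursive);    // illustrative: the existing logic
}
{code}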

h4. FTP Ambiguities

* It throws a plain IOE when trying to {{create}} over a non-empty directory 
with overwrite==true. HDFS throws a {{FileAlreadyExistsException}}, which I 
propose mimicking:
{code}
testOverwriteNonEmptyDirectory(org.apache.hadoop.fs.contract.ftp.TestFTPCreateContract)  Time elapsed: 1027 sec  <<< ERROR!
java.io.IOException: Directory: ftp:/ubuntu/home/stevel/test/testOverwriteNonEmptyDirectory is not empty.
at org.apache.hadoop.fs.ftp.FTPFileSystem.delete(FTPFileSystem.java:323)
at org.apache.hadoop.fs.ftp.FTPFileSystem.delete(FTPFileSystem.java:304)
at org.apache.hadoop.fs.ftp.FTPFileSystem.create(FTPFileSystem.java:224)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:888)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:869)
at org.apache.hadoop.fs.contract.ContractTestUtils.writeDataset(ContractTestUtils.java:130)
at org.apache.hadoop.fs.contract.AbstractCreateContractTest.testOverwriteNonEmptyDirectory(AbstractCreateContractTest.java:115)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:30)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
{code}


> Write contract tests for FTP filesystem, fix places where it breaks
> ---
>
> Key: HADOOP-9712
> URL: https://issues.apache.org/jira/browse/HADOOP-9712
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 1.2.0, 3.0.0, 2.1.0-beta
>Reporter: Steve Loughran
>Priority: Minor
>
> implement the abstract contract tests for S3, identify where it is failing to 
> meet expectations and, where possible, fix. 
> FTPFS appears to be the least tested (& presumably used) hadoop filesystem 
> implementation; there may be some bug reports that have been around for years 
> that could drive test cases and fixes.

[jira] [Commented] (HADOOP-9361) Strictly define the expected behavior of filesystem APIs and write tests to verify compliance

2013-07-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13704764#comment-13704764
 ] 

Hadoop QA commented on HADOOP-9361:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12591663/HADOOP-9361-004.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 52 new 
or modified test files.

  {color:red}-1 javac{color}.  The applied patch generated 1154 javac 
compiler warnings (more than the trunk's current 1153 warnings).

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.fs.contract.mock.TestMockFSContract
  org.apache.hadoop.fs.contract.localfs.TestLocalRenameContract
  org.apache.hadoop.fs.TestLocalFileSystem
  org.apache.hadoop.fs.contract.localfs.TestLocalSeekContract
  org.apache.hadoop.fs.contract.localfs.TestLocalMkdirContract
  org.apache.hadoop.fs.contract.hdfs.TestHDFSMkdirContract
  org.apache.hadoop.fs.contract.hdfs.TestHDFSRenameContract
  
org.apache.hadoop.fs.contract.hdfs.TestHDFSRootDirectoryContract

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2759//testReport/
Javac warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2759//artifact/trunk/patchprocess/diffJavacWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2759//console

This message is automatically generated.

> Strictly define the expected behavior of filesystem APIs and write tests to 
> verify compliance
> -
>
> Key: HADOOP-9361
> URL: https://issues.apache.org/jira/browse/HADOOP-9361
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, test
>Affects Versions: 3.0.0, 2.1.0-beta
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-9361-001.patch, HADOOP-9361-002.patch, 
> HADOOP-9361-003.patch, HADOOP-9361-004.patch
>
>
> {{FileSystem}} and {{FileContract}} aren't tested rigorously enough -while 
> HDFS gets tested downstream, other filesystems, such as blobstore bindings, 
> don't.
> The only tests that are common are those of {{FileSystemContractTestBase}}, 
> which HADOOP-9258 shows is incomplete.
> I propose 
> # writing more tests which clarify expected behavior
> # testing operations in the interface being in their own JUnit4 test classes, 
> instead of one big test suite. 
> # Having each FS declare via a properties file what behaviors they offer, 
> such as atomic-rename, atomic-delete, umask, immediate-consistency -test 
> methods can downgrade to skipped test cases if a feature is missing.



[jira] [Commented] (HADOOP-8873) Port HADOOP-8175 (Add mkdir -p flag) to branch-1

2013-07-10 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13704802#comment-13704802
 ] 

Akira AJISAKA commented on HADOOP-8873:
---

Sorry, I understand now that mkdir fails on 1.2 if the directory you are 
creating already exists.
Do you mean only HADOOP-8175 should be backported?

> Port HADOOP-8175 (Add mkdir -p flag) to branch-1
> 
>
> Key: HADOOP-8873
> URL: https://issues.apache.org/jira/browse/HADOOP-8873
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 1.2.0
>Reporter: Eli Collins
>Assignee: Akira AJISAKA
>  Labels: newbie
> Attachments: HADOOP-8873-1.patch, HADOOP-8873-2.patch
>
>
> Per HADOOP-8551 let's port the mkdir -p option to branch-1 for a 1.x release 
> to help users transition to the new shell behavior. In Hadoop 2.x mkdir 
> currently requires the -p option to create parent directories but a program 
> that specifies it won't work on 1.x since it doesn't support this option.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9712) Write contract tests for FTP filesystem, fix places where it breaks

2013-07-10 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-9712:
---

Attachment: HADOOP-9712-001.patch

Patch containing purely the changes to the FTP FS source:
* make raised exceptions consistent with HDFS
* don't allow overwrites of directories with files
* stop {{delete()}} throwing {{FileNotFoundException}} (see the sketch below).

Apart from rename issues, the (new) contract tests are working.
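
For illustration, a minimal sketch of the {{delete()}} change (the 
{{doDelete()}} helper is hypothetical; this is the shape of the behavior, not 
the actual patch):
{code}
@Override
public boolean delete(Path path, boolean recursive) throws IOException {
  FileStatus status;
  try {
    // getFileStatus() throws FileNotFoundException when the path is absent
    status = getFileStatus(path);
  } catch (FileNotFoundException e) {
    // HDFS-consistent behavior: deleting a missing path returns false
    // instead of propagating FileNotFoundException
    return false;
  }
  return doDelete(status, recursive);  // hypothetical helper doing the FTP work
}
{code}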

> Write contract tests for FTP filesystem, fix places where it breaks
> ---
>
> Key: HADOOP-9712
> URL: https://issues.apache.org/jira/browse/HADOOP-9712
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 1.2.0, 3.0.0, 2.1.0-beta
>Reporter: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-9712-001.patch
>
>
> implement the abstract contract tests for S3, identify where it is failing to 
> meet expectations and, where possible, fix. 
> FTPFS appears to be the least tested (& presumably used) hadoop filesystem 
> implementation; there may be some bug reports that have been around for years 
> that could drive test cases and fixes.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9712) Write contract tests for FTP filesystem, fix places where it breaks

2013-07-10 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-9712:
---

Status: Patch Available  (was: Open)

Tests are in HADOOP-9361; they will only run if you define the FS URL and FTP 
path
{code}
<property>
  <name>fs.ftp.contract.test.fs.name</name>
  <value>ftp://linuxvm/</value>
</property>

<property>
  <name>fs.ftp.contract.test.testdir</name>
  <value>/home/stevel/test</value>
</property>
{code}

BTW, as a sanity check, the contract-driven tests refuse to run if the 
specified FS is not FTP (more precisely, if the FS URI scheme does not match 
that of the contract). This is to stop rm -rf tests on the local fs happening 
by accident.
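
A sketch of what that guard might look like (names are illustrative, not the 
actual patch):
{code}
import java.net.URI;

// Hypothetical guard: refuse to run (potentially destructive) contract tests
// against a filesystem whose URI scheme differs from the contract under test.
static void checkSchemeMatches(URI testFsUri, String contractScheme) {
  if (!contractScheme.equals(testFsUri.getScheme())) {
    throw new AssertionError("Test filesystem " + testFsUri
        + " does not use contract scheme \"" + contractScheme
        + "\"; refusing to run tests against it");
  }
}
{code}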

> Write contract tests for FTP filesystem, fix places where it breaks
> ---
>
> Key: HADOOP-9712
> URL: https://issues.apache.org/jira/browse/HADOOP-9712
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 1.2.0, 3.0.0, 2.1.0-beta
>Reporter: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-9712-001.patch
>
>
> implement the abstract contract tests for S3, identify where it is failing to 
> meet expectations and, where possible, fix. 
> FTPFS appears to be the least tested (& presumably used) hadoop filesystem 
> implementation; there may be some bug reports that have been around for years 
> that could drive test cases and fixes.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9718) Branch-1-win TestGroupFallback#testGroupWithFallback() failed caused by java.lang.UnsatisfiedLinkError

2013-07-10 Thread Xi Fang (JIRA)
Xi Fang created HADOOP-9718:
---

 Summary: Branch-1-win TestGroupFallback#testGroupWithFallback() 
failed caused by java.lang.UnsatisfiedLinkError
 Key: HADOOP-9718
 URL: https://issues.apache.org/jira/browse/HADOOP-9718
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1-win
 Environment: Windows
Reporter: Xi Fang
Assignee: Xi Fang
 Fix For: 1-win


Here is the error information:
org.apache.hadoop.security.JniBasedUnixGroupsMapping.getGroupForUser(Ljava/lang/String;)[Ljava/lang/String;
java.lang.UnsatisfiedLinkError: 
org.apache.hadoop.security.JniBasedUnixGroupsMapping.getGroupForUser(Ljava/lang/String;)[Ljava/lang/String;
at org.apache.hadoop.security.JniBasedUnixGroupsMapping.getGroupForUser(Native 
Method)
at 
org.apache.hadoop.security.JniBasedUnixGroupsMapping.getGroups(JniBasedUnixGroupsMapping.java:53)
at 
org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback.getGroups(JniBasedUnixGroupsMappingWithFallback.java:50)
at org.apache.hadoop.security.Groups.getGroups(Groups.java:79)
at 
org.apache.hadoop.security.TestGroupFallback.testGroupWithFallback(TestGroupFallback.java:77)
This is related to https://issues.apache.org/jira/browse/HADOOP-9232.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Work started] (HADOOP-9718) Branch-1-win TestGroupFallback#testGroupWithFallback() failed caused by java.lang.UnsatisfiedLinkError

2013-07-10 Thread Xi Fang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-9718 started by Xi Fang.

> Branch-1-win TestGroupFallback#testGroupWithFallback() failed caused by 
> java.lang.UnsatisfiedLinkError
> --
>
> Key: HADOOP-9718
> URL: https://issues.apache.org/jira/browse/HADOOP-9718
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 1-win
> Environment: Windows
>Reporter: Xi Fang
>Assignee: Xi Fang
> Fix For: 1-win
>
> Attachments: HADOOP-9718.patch
>
>
> Here is the error information:
> org.apache.hadoop.security.JniBasedUnixGroupsMapping.getGroupForUser(Ljava/lang/String;)[Ljava/lang/String;
> java.lang.UnsatisfiedLinkError: 
> org.apache.hadoop.security.JniBasedUnixGroupsMapping.getGroupForUser(Ljava/lang/String;)[Ljava/lang/String;
> at 
> org.apache.hadoop.security.JniBasedUnixGroupsMapping.getGroupForUser(Native 
> Method)
> at 
> org.apache.hadoop.security.JniBasedUnixGroupsMapping.getGroups(JniBasedUnixGroupsMapping.java:53)
> at 
> org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback.getGroups(JniBasedUnixGroupsMappingWithFallback.java:50)
> at org.apache.hadoop.security.Groups.getGroups(Groups.java:79)
> at 
> org.apache.hadoop.security.TestGroupFallback.testGroupWithFallback(TestGroupFallback.java:77)
> This is related to https://issues.apache.org/jira/browse/HADOOP-9232.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9718) Branch-1-win TestGroupFallback#testGroupWithFallback() failed caused by java.lang.UnsatisfiedLinkError

2013-07-10 Thread Xi Fang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xi Fang updated HADOOP-9718:


Attachment: HADOOP-9718.patch

> Branch-1-win TestGroupFallback#testGroupWithFallback() failed caused by 
> java.lang.UnsatisfiedLinkError
> --
>
> Key: HADOOP-9718
> URL: https://issues.apache.org/jira/browse/HADOOP-9718
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 1-win
> Environment: Windows
>Reporter: Xi Fang
>Assignee: Xi Fang
> Fix For: 1-win
>
> Attachments: HADOOP-9718.patch
>
>
> Here is the error information:
> org.apache.hadoop.security.JniBasedUnixGroupsMapping.getGroupForUser(Ljava/lang/String;)[Ljava/lang/String;
> java.lang.UnsatisfiedLinkError: 
> org.apache.hadoop.security.JniBasedUnixGroupsMapping.getGroupForUser(Ljava/lang/String;)[Ljava/lang/String;
> at 
> org.apache.hadoop.security.JniBasedUnixGroupsMapping.getGroupForUser(Native 
> Method)
> at 
> org.apache.hadoop.security.JniBasedUnixGroupsMapping.getGroups(JniBasedUnixGroupsMapping.java:53)
> at 
> org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback.getGroups(JniBasedUnixGroupsMappingWithFallback.java:50)
> at org.apache.hadoop.security.Groups.getGroups(Groups.java:79)
> at 
> org.apache.hadoop.security.TestGroupFallback.testGroupWithFallback(TestGroupFallback.java:77)
> This is related to https://issues.apache.org/jira/browse/HADOOP-9232.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9718) Branch-1-win TestGroupFallback#testGroupWithFallback() failed caused by java.lang.UnsatisfiedLinkError

2013-07-10 Thread Xi Fang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13704831#comment-13704831
 ] 

Xi Fang commented on HADOOP-9718:
-

Backporting https://issues.apache.org/jira/browse/HADOOP-9232
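
For context, the fallback pattern HADOOP-9232 introduces looks roughly like 
this (a sketch of the JniBasedUnixGroupsMappingWithFallback constructor logic, 
not the exact branch-1-win backport):
{code}
private GroupMappingServiceProvider impl;

public JniBasedUnixGroupsMappingWithFallback() {
  // Use the JNI-based group mapping only when the native library actually
  // loaded; otherwise fall back to the shell-based mapping, which avoids
  // the UnsatisfiedLinkError seen in TestGroupFallback.
  if (NativeCodeLoader.isNativeCodeLoaded()) {
    impl = new JniBasedUnixGroupsMapping();
  } else {
    impl = new ShellBasedUnixGroupsMapping();
  }
}
{code}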

> Branch-1-win TestGroupFallback#testGroupWithFallback() failed caused by 
> java.lang.UnsatisfiedLinkError
> --
>
> Key: HADOOP-9718
> URL: https://issues.apache.org/jira/browse/HADOOP-9718
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 1-win
> Environment: Windows
>Reporter: Xi Fang
>Assignee: Xi Fang
> Fix For: 1-win
>
> Attachments: HADOOP-9718.patch
>
>
> Here is the error information:
> org.apache.hadoop.security.JniBasedUnixGroupsMapping.getGroupForUser(Ljava/lang/String;)[Ljava/lang/String;
> java.lang.UnsatisfiedLinkError: 
> org.apache.hadoop.security.JniBasedUnixGroupsMapping.getGroupForUser(Ljava/lang/String;)[Ljava/lang/String;
> at 
> org.apache.hadoop.security.JniBasedUnixGroupsMapping.getGroupForUser(Native 
> Method)
> at 
> org.apache.hadoop.security.JniBasedUnixGroupsMapping.getGroups(JniBasedUnixGroupsMapping.java:53)
> at 
> org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback.getGroups(JniBasedUnixGroupsMappingWithFallback.java:50)
> at org.apache.hadoop.security.Groups.getGroups(Groups.java:79)
> at 
> org.apache.hadoop.security.TestGroupFallback.testGroupWithFallback(TestGroupFallback.java:77)
> This is related to https://issues.apache.org/jira/browse/HADOOP-9232.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9703) org.apache.hadoop.ipc.Client leaks threads on stop.

2013-07-10 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13704840#comment-13704840
 ] 

Colin Patrick McCabe commented on HADOOP-9703:
--

Thanks for tackling this JIRA.  The logic looks OK.

I think it would be better to create a static object which handles all of this 
for you.

For example you could have 
{code}
private static final ClientExecutorServiceFactory EXECUTOR_FACTORY =
    new ClientExecutorServiceFactory();

private static class ClientExecutorServiceFactory {
  synchronized ExecutorService ref() { ... }
  synchronized ExecutorService unref() { ... }
}
{code}

ref manages incrementing the reference count and unref manages decrementing it.

This avoids the findbugs warning, and avoids having to document which locks 
have to be taken where (because ClientExecutorServiceFactory handles that for 
you).
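
For illustration, a minimal sketch of such a reference-counted factory (the 
thread-pool choice and names are illustrative, not a definitive 
implementation):
{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class ClientExecutorServiceFactory {
  private int refCount = 0;
  private ExecutorService executor = null;

  // Returns the shared executor, creating it on the first reference.
  synchronized ExecutorService ref() {
    if (refCount == 0) {
      executor = Executors.newCachedThreadPool();
    }
    refCount++;
    return executor;
  }

  // Releases one reference; shuts the executor down once no clients remain,
  // so Client#stop no longer leaks threads.
  synchronized void unref() {
    if (--refCount == 0) {
      executor.shutdown();
      executor = null;
    }
  }
}
{code}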

> org.apache.hadoop.ipc.Client leaks threads on stop.
> ---
>
> Key: HADOOP-9703
> URL: https://issues.apache.org/jira/browse/HADOOP-9703
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Tsuyoshi OZAWA
>Priority: Minor
> Attachments: HADOOP-9703.1.patch
>
>
> org.apache.hadoop.ipc.Client#stop says "Stop all threads related to this 
> client." but does not shutdown the static SEND_PARAMS_EXECUTOR, so usage of 
> this class always leaks threads rather than cleanly closing or shutting down.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HADOOP-9718) Branch-1-win TestGroupFallback#testGroupWithFallback() failed caused by java.lang.UnsatisfiedLinkError

2013-07-10 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth resolved HADOOP-9718.
---

  Resolution: Fixed
Hadoop Flags: Reviewed

+1 for the patch.  I tested it successfully on Mac and Windows.  I committed 
this to branch-1-win.  Thank you for the contribution, Xi.

> Branch-1-win TestGroupFallback#testGroupWithFallback() failed caused by 
> java.lang.UnsatisfiedLinkError
> --
>
> Key: HADOOP-9718
> URL: https://issues.apache.org/jira/browse/HADOOP-9718
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 1-win
> Environment: Windows
>Reporter: Xi Fang
>Assignee: Xi Fang
> Fix For: 1-win
>
> Attachments: HADOOP-9718.patch
>
>
> Here is the error information:
> org.apache.hadoop.security.JniBasedUnixGroupsMapping.getGroupForUser(Ljava/lang/String;)[Ljava/lang/String;
> java.lang.UnsatisfiedLinkError: 
> org.apache.hadoop.security.JniBasedUnixGroupsMapping.getGroupForUser(Ljava/lang/String;)[Ljava/lang/String;
> at 
> org.apache.hadoop.security.JniBasedUnixGroupsMapping.getGroupForUser(Native 
> Method)
> at 
> org.apache.hadoop.security.JniBasedUnixGroupsMapping.getGroups(JniBasedUnixGroupsMapping.java:53)
> at 
> org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback.getGroups(JniBasedUnixGroupsMappingWithFallback.java:50)
> at org.apache.hadoop.security.Groups.getGroups(Groups.java:79)
> at 
> org.apache.hadoop.security.TestGroupFallback.testGroupWithFallback(TestGroupFallback.java:77)
> This is related to https://issues.apache.org/jira/browse/HADOOP-9232.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9718) Branch-1-win TestGroupFallback#testGroupWithFallback() failed caused by java.lang.UnsatisfiedLinkError

2013-07-10 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-9718:
--

 Component/s: security
Target Version/s: 1-win

> Branch-1-win TestGroupFallback#testGroupWithFallback() failed caused by 
> java.lang.UnsatisfiedLinkError
> --
>
> Key: HADOOP-9718
> URL: https://issues.apache.org/jira/browse/HADOOP-9718
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 1-win
> Environment: Windows
>Reporter: Xi Fang
>Assignee: Xi Fang
> Fix For: 1-win
>
> Attachments: HADOOP-9718.patch
>
>
> Here is the error information:
> org.apache.hadoop.security.JniBasedUnixGroupsMapping.getGroupForUser(Ljava/lang/String;)[Ljava/lang/String;
> java.lang.UnsatisfiedLinkError: 
> org.apache.hadoop.security.JniBasedUnixGroupsMapping.getGroupForUser(Ljava/lang/String;)[Ljava/lang/String;
> at 
> org.apache.hadoop.security.JniBasedUnixGroupsMapping.getGroupForUser(Native 
> Method)
> at 
> org.apache.hadoop.security.JniBasedUnixGroupsMapping.getGroups(JniBasedUnixGroupsMapping.java:53)
> at 
> org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback.getGroups(JniBasedUnixGroupsMappingWithFallback.java:50)
> at org.apache.hadoop.security.Groups.getGroups(Groups.java:79)
> at 
> org.apache.hadoop.security.TestGroupFallback.testGroupWithFallback(TestGroupFallback.java:77)
> This is related to https://issues.apache.org/jira/browse/HADOOP-9232.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9371) Define Semantics of FileSystem and FileContext more rigorously

2013-07-10 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-9371:
---

Attachment: HADOOP-9371-003.patch

Patch with the document in .apt format.

I'm trying to move away from must/may/should to a more formal definition, 
essentially using set theory. I'd like feedback on this approach.

# We need a good syntax; I've used Standard ML as the rough basis for this, but 
not perfectly.
# I'm not handling concurrency in the formal bits at all; that's a different 
level of formal logic that I don't want to go near, even if I were confident I 
could use it.
# I'm working on the core operations first: create, open, and delete.
# Rename is more complex than this; I need to go through all the relevant JIRAs 
as well as the code.
# mkdir is surprising too; there are some inconsistencies between local and 
hdfs that I need to understand better. It looks like hdfs always returns true 
("there is a directory"), while local returns true iff the directory was 
created (see the sketch after this list).
# Permissions need to be defined as well, because of things like "What should 
the permissions be up the dir tree when I call {{create(path)}} and its parents 
don't exist?"

Comments welcome.
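
To make the mkdir point concrete, a JUnit 4 test sketch of the divergence (the 
{{fs}} field is assumed to be the filesystem under test; this is illustrative, 
not part of the patch):
{code}
@Test
public void testMkdirsOnExistingDirectory() throws IOException {
  Path dir = new Path("/test/existing");
  assertTrue(fs.mkdirs(dir));          // first creation succeeds everywhere
  boolean recreated = fs.mkdirs(dir);  // hdfs: true; local: reportedly false
  // The contract has to pick one of these outcomes and document it.
  System.out.println("mkdirs on existing directory -> " + recreated);
}
{code}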

> Define Semantics of FileSystem and FileContext more rigorously
> --
>
> Key: HADOOP-9371
> URL: https://issues.apache.org/jira/browse/HADOOP-9371
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 1.2.0, 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-9361.2.patch, HADOOP-9361.patch, 
> HADOOP-9371-003.patch, HadoopFilesystemContract.pdf
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> The semantics of {{FileSystem}} and {{FileContext}} are not completely 
> defined in terms of 
> # core expectations of a filesystem
> # consistency requirements.
> # concurrency requirements.
> # minimum scale limits
> Furthermore, methods are not defined strictly enough in terms of their 
> outcomes and failure modes.
> The requirements and method semantics should be defined more strictly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9712) Write contract tests for FTP filesystem, fix places where it breaks

2013-07-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13704892#comment-13704892
 ] 

Hadoop QA commented on HADOOP-9712:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12591686/HADOOP-9712-001.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2760//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2760//console

This message is automatically generated.

> Write contract tests for FTP filesystem, fix places where it breaks
> ---
>
> Key: HADOOP-9712
> URL: https://issues.apache.org/jira/browse/HADOOP-9712
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 1.2.0, 3.0.0, 2.1.0-beta
>Reporter: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-9712-001.patch
>
>
> implement the abstract contract tests for S3, identify where it is failing to 
> meet expectations and, where possible, fix. 
> FTPFS appears to be the least tested (& presumably used) hadoop filesystem 
> implementation; there may be some bug reports that have been around for years 
> that could drive test cases and fixes.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9718) Branch-1-win TestGroupFallback#testGroupWithFallback() failed caused by java.lang.UnsatisfiedLinkError

2013-07-10 Thread Xi Fang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13704889#comment-13704889
 ] 

Xi Fang commented on HADOOP-9718:
-

Thanks Chris!

> Branch-1-win TestGroupFallback#testGroupWithFallback() failed caused by 
> java.lang.UnsatisfiedLinkError
> --
>
> Key: HADOOP-9718
> URL: https://issues.apache.org/jira/browse/HADOOP-9718
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 1-win
> Environment: Windows
>Reporter: Xi Fang
>Assignee: Xi Fang
> Fix For: 1-win
>
> Attachments: HADOOP-9718.patch
>
>
> Here is the error information:
> org.apache.hadoop.security.JniBasedUnixGroupsMapping.getGroupForUser(Ljava/lang/String;)[Ljava/lang/String;
> java.lang.UnsatisfiedLinkError: 
> org.apache.hadoop.security.JniBasedUnixGroupsMapping.getGroupForUser(Ljava/lang/String;)[Ljava/lang/String;
> at 
> org.apache.hadoop.security.JniBasedUnixGroupsMapping.getGroupForUser(Native 
> Method)
> at 
> org.apache.hadoop.security.JniBasedUnixGroupsMapping.getGroups(JniBasedUnixGroupsMapping.java:53)
> at 
> org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback.getGroups(JniBasedUnixGroupsMappingWithFallback.java:50)
> at org.apache.hadoop.security.Groups.getGroups(Groups.java:79)
> at 
> org.apache.hadoop.security.TestGroupFallback.testGroupWithFallback(TestGroupFallback.java:77)
> This is related to https://issues.apache.org/jira/browse/HADOOP-9232.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9711) Write contract tests for S3Native; fix places where it breaks

2013-07-10 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-9711:
---

Attachment: HADOOP-9711-004.patch

This is the fraction of the HADOOP-9361-004 patch to S3N that makes its 
exception codes consistent with HDFS.

It still behaves differently, because it can't distinguish files from empty 
directories, though at the very least we could stop files being created above 
non-empty dirs.

A seek to 0 in a 0-byte file still fails.
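
A test sketch of that last case (assuming a configured {{fs}} bound to S3N; 
illustrative only):
{code}
@Test
public void testSeekToZeroOnEmptyFile() throws IOException {
  Path path = new Path("/test/empty.txt");
  fs.create(path, true).close();        // create a zero-byte file
  FSDataInputStream in = fs.open(path);
  try {
    in.seek(0);                         // fails on S3N today; works on HDFS
  } finally {
    in.close();
  }
}
{code}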

> Write contract tests for S3Native; fix places where it breaks
> -
>
> Key: HADOOP-9711
> URL: https://issues.apache.org/jira/browse/HADOOP-9711
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 1.2.0, 3.0.0, 2.1.0-beta
>Reporter: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-9711-004.patch
>
>
> implement the abstract contract tests for S3, identify where it is failing to 
> meet expectations and, where possible, fix. Blobstores tend to treat 0 byte 
> files as directories, so tests overwriting files with dirs and vice versa may 
> fail and have to be skipped

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-7352) Contracts of LocalFileSystem and DistributedFileSystem should require FileSystem::listStatus throw IOException not return null upon access error

2013-07-10 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13704912#comment-13704912
 ] 

Steve Loughran commented on HADOOP-7352:


I want to revisit this as part of the FS contract spec/changes. Does throwing 
an IOE still seem the right approach? And is there any test for this yet?

> Contracts of LocalFileSystem and DistributedFileSystem should require 
> FileSystem::listStatus throw IOException not return null upon access error
> 
>
> Key: HADOOP-7352
> URL: https://issues.apache.org/jira/browse/HADOOP-7352
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, fs/s3
>Reporter: Matt Foley
>Assignee: Matt Foley
>
> In HADOOP-6201 and HDFS-538 it was agreed that FileSystem::listStatus should 
> throw FileNotFoundException instead of returning null, when the target 
> directory did not exist.
> However, in LocalFileSystem implementation today, FileSystem::listStatus 
> still may return null, when the target directory exists but does not grant 
> read permission.  This causes NPE in many callers, for all the reasons cited 
> in HADOOP-6201 and HDFS-538.  See HADOOP-7327 and its linked issues for 
> examples.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9688) Add globally unique Client ID to RPC requests

2013-07-10 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13704976#comment-13704976
 ] 

Konstantin Shvachko commented on HADOOP-9688:
-

Sorry for coming late to this.
The idea to use ClientId + CallId as the unique combination for a call is 
absolutely right.
# One question: should the ClientId be on the RPC client or the DFSClient? Now 
both have an id, which makes one of them redundant.
# randomUUID is not unique. Even though, as Chris commented, the probability of 
a collision is low, there are ways to generate really unique clientIds, say as 
we generate storageIDs for DataNodes. Should we target that?
# The naming of the new field could be confusing. You call it uuid in the 
Client and clientId in other places, if I understood it correctly.

> Add globally unique Client ID to RPC requests
> -
>
> Key: HADOOP-9688
> URL: https://issues.apache.org/jira/browse/HADOOP-9688
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Suresh Srinivas
>Assignee: Suresh Srinivas
>Priority: Blocker
> Fix For: 3.0.0
>
> Attachments: HADOOP-9688.clientId.1.patch, 
> HADOOP-9688.clientId.patch, HADOOP-9688.patch
>
>
> This is a subtask in hadoop-common related to HDFS-4942 to add unique request 
> ID to RPC requests.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9719) Branch-1-win TestFsShellReturnCode#testChgrp() failed caused by incorrect exit codes

2013-07-10 Thread Xi Fang (JIRA)
Xi Fang created HADOOP-9719:
---

 Summary: Branch-1-win TestFsShellReturnCode#testChgrp() failed 
caused by incorrect exit codes
 Key: HADOOP-9719
 URL: https://issues.apache.org/jira/browse/HADOOP-9719
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 1-win
Reporter: Xi Fang
Assignee: Xi Fang
Priority: Minor
 Fix For: 1-win


TestFsShellReturnCode#testChgrp() failed when we try to use "-chgrp" to change 
group association of files to "admin".
// Test 1: exit code for chgrp on existing file is 0
String argv[] = { "-chgrp", "admin", f1 };
verify(fs, "-chgrp", argv, 1, fsShell, 0);
.
On Windows, this is the error information:
org.apache.hadoop.util.Shell$ExitCodeException: GetSidFromAcctName error 
(1332): No mapping between account names and security IDs was done.
Invalid group name: admin
This test case passed previously, but it looks like it incorrectly passed 
because of another bug in FsShell#runCmdHandler 
(https://issues.apache.org/jira/browse/HADOOP-9502). The original code in 
FsShell#runCmdHandler may not return error exit codes for some exceptions (see 
private static int runCmdHandler(CmdHandler handler, FileStatus stat, 
FileSystem srcFs, boolean recursive) throws IOException {}). Therefore, in 
the original Branch-1-win, even if admin is not a valid group, no error was 
caught. The fix of HADOOP-9502 makes this test fail.

This test also failed on Linux.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Work started] (HADOOP-9719) Branch-1-win TestFsShellReturnCode#testChgrp() failed caused by incorrect exit codes

2013-07-10 Thread Xi Fang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-9719 started by Xi Fang.

> Branch-1-win TestFsShellReturnCode#testChgrp() failed caused by incorrect 
> exit codes
> 
>
> Key: HADOOP-9719
> URL: https://issues.apache.org/jira/browse/HADOOP-9719
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 1-win
>Reporter: Xi Fang
>Assignee: Xi Fang
>Priority: Minor
>  Labels: test
> Fix For: 1-win
>
>
> TestFsShellReturnCode#testChgrp() failed when we try to use "-chgrp" to 
> change group association of files to "admin".
> // Test 1: exit code for chgrp on existing file is 0
> String argv[] = { "-chgrp", "admin", f1 };
> verify(fs, "-chgrp", argv, 1, fsShell, 0);
> .
> On Windows, this is the error information:
> org.apache.hadoop.util.Shell$ExitCodeException: GetSidFromAcctName error 
> (1332): No mapping between account names and security IDs was done.
> Invalid group name: admin
> This test case passed previously, but it looks like it incorrectly passed 
> because of another bug in FsShell#runCmdHandler 
> (https://issues.apache.org/jira/browse/HADOOP-9502). The original code in 
> FsShell#runCmdHandler may not return error exit codes for some exceptions 
> (see private static int runCmdHandler(CmdHandler handler, FileStatus stat, 
> FileSystem srcFs, boolean recursive) throws IOException {}). Therefore, in 
> the original Branch-1-win, even if admin is not a valid group, no error was 
> caught. The fix of HADOOP-9502 makes this test fail.
> This test also failed on Linux.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9719) Branch-1-win TestFsShellReturnCode#testChgrp() failed caused by incorrect exit codes

2013-07-10 Thread Xi Fang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xi Fang updated HADOOP-9719:


Attachment: HADOOP-9719.patch

> Branch-1-win TestFsShellReturnCode#testChgrp() failed caused by incorrect 
> exit codes
> 
>
> Key: HADOOP-9719
> URL: https://issues.apache.org/jira/browse/HADOOP-9719
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 1-win
>Reporter: Xi Fang
>Assignee: Xi Fang
>Priority: Minor
>  Labels: test
> Fix For: 1-win
>
> Attachments: HADOOP-9719.patch
>
>
> TestFsShellReturnCode#testChgrp() failed when we try to use "-chgrp" to 
> change group association of files to "admin".
> // Test 1: exit code for chgrp on existing file is 0
> String argv[] = { "-chgrp", "admin", f1 };
> verify(fs, "-chgrp", argv, 1, fsShell, 0);
> .
> On Windows, this is the error information:
> org.apache.hadoop.util.Shell$ExitCodeException: GetSidFromAcctName error 
> (1332): No mapping between account names and security IDs was done.
> Invalid group name: admin
> This test case passed previously, but it looks like it incorrectly passed 
> because of another bug in FsShell#runCmdHandler 
> (https://issues.apache.org/jira/browse/HADOOP-9502). The original code in 
> FsShell#runCmdHandler may not return error exit codes for some exceptions 
> (see private static int runCmdHandler(CmdHandler handler, FileStatus stat, 
> FileSystem srcFs, boolean recursive) throws IOException {}). Therefore, in 
> the original Branch-1-win, even if admin is not a valid group, no error was 
> caught. The fix of HADOOP-9502 makes this test fail.
> This test also failed on Linux.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9719) Branch-1-win TestFsShellReturnCode#testChgrp() failed caused by incorrect exit codes

2013-07-10 Thread Xi Fang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xi Fang updated HADOOP-9719:


Description: 
TestFsShellReturnCode#testChgrp() failed when we try to use "-chgrp" to change 
group association of files to "admin".
// Test 1: exit code for chgrp on existing file is 0
String argv[] = { "-chgrp", "admin", f1 };
verify(fs, "-chgrp", argv, 1, fsShell, 0);
.
On Windows, this is the error information:
org.apache.hadoop.util.Shell$ExitCodeException: GetSidFromAcctName error 
(1332): No mapping between account names and security IDs was done.
Invalid group name: admin
This test case passed previously, but it looks like it incorrectly passed 
because of another bug in FsShell#runCmdHandler 
(https://issues.apache.org/jira/browse/HADOOP-9502). The original code in 
FsShell#runCmdHandler may not return error exit codes for some exceptions (see 
private static int runCmdHandler(CmdHandler handler, FileStatus stat, 
FileSystem srcFs, boolean recursive) throws IOException {}). Therefore, in 
the previous Branch-1-win, even if admin is not a valid group, no error was 
caught. The fix of HADOOP-9502 makes this test fail.

This test also failed on Linux.

  was:
TestFsShellReturnCode#testChgrp() failed when we try to use "-chgrp" to change 
group association of files to "admin".
// Test 1: exit code for chgrp on existing file is 0
String argv[] = { "-chgrp", "admin", f1 };
verify(fs, "-chgrp", argv, 1, fsShell, 0);
.
On Windows, this is the error information:
org.apache.hadoop.util.Shell$ExitCodeException: GetSidFromAcctName error 
(1332): No mapping between account names and security IDs was done.
Invalid group name: admin
This test case passed previously, but it looks like it incorrectly passed 
because of another bug in FsShell#runCmdHandler 
(https://issues.apache.org/jira/browse/HADOOP-9502). The original code in 
FsShell#runCmdHandler may not return error exit codes for some exceptions (see 
private static int runCmdHandler(CmdHandler handler, FileStatus stat, 
FileSystem srcFs, boolean recursive) throws IOException {}). Therefore, in 
the original Branch-1-win, even if admin is not a valid group, no error was 
caught. The fix of HADOOP-9502 makes this test fail.

This test also failed on Linux.


> Branch-1-win TestFsShellReturnCode#testChgrp() failed caused by incorrect 
> exit codes
> 
>
> Key: HADOOP-9719
> URL: https://issues.apache.org/jira/browse/HADOOP-9719
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 1-win
>Reporter: Xi Fang
>Assignee: Xi Fang
>Priority: Minor
>  Labels: test
> Fix For: 1-win
>
> Attachments: HADOOP-9719.patch
>
>
> TestFsShellReturnCode#testChgrp() failed when we try to use "-chgrp" to 
> change group association of files to "admin".
> // Test 1: exit code for chgrp on existing file is 0
> String argv[] = { "-chgrp", "admin", f1 };
> verify(fs, "-chgrp", argv, 1, fsShell, 0);
> .
> On Windows, this is the error information:
> org.apache.hadoop.util.Shell$ExitCodeException: GetSidFromAcctName error 
> (1332): No mapping between account names and security IDs was done.
> Invalid group name: admin
> This test case passed previously, but it looks like it incorrectly passed 
> because of another bug in FsShell#runCmdHandler 
> (https://issues.apache.org/jira/browse/HADOOP-9502). The original code in 
> FsShell#runCmdHandler may not return error exit codes for some exceptions 
> (see private static int runCmdHandler(CmdHandler handler, FileStatus stat, 
> FileSystem srcFs, boolean recursive) throws IOException {}). Therefore, in 
> the previous Branch-1-win, even if admin is not a valid group, no error was 
> caught. The fix of HADOOP-9502 makes this test fail.
> This test also failed on Linux.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9719) Branch-1-win TestFsShellReturnCode#testChgrp() failed caused by incorrect exit codes

2013-07-10 Thread Xi Fang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13705025#comment-13705025
 ] 

Xi Fang commented on HADOOP-9719:
-

A patch is attached. In testChgrp(), I replaced the hardcoded "admin" with the 
group of the current user. I also found that the "admin" in testChown() was not 
correct, although the test passed; I changed that as well.
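
A sketch of the substitution (assuming branch-1's UserGroupInformation API; 
the exact patch may differ):
{code}
// Use a group the current user actually belongs to instead of the
// hardcoded "admin", which has no SID mapping on Windows.
UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
String group = ugi.getGroupNames()[0];   // primary group of the test user
String argv[] = { "-chgrp", group, f1 };
verify(fs, "-chgrp", argv, 1, fsShell, 0);
{code}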

> Branch-1-win TestFsShellReturnCode#testChgrp() failed caused by incorrect 
> exit codes
> 
>
> Key: HADOOP-9719
> URL: https://issues.apache.org/jira/browse/HADOOP-9719
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 1-win
>Reporter: Xi Fang
>Assignee: Xi Fang
>Priority: Minor
>  Labels: test
> Fix For: 1-win
>
> Attachments: HADOOP-9719.patch
>
>
> TestFsShellReturnCode#testChgrp() failed when we try to use "-chgrp" to 
> change group association of files to "admin".
> // Test 1: exit code for chgrp on existing file is 0
> String argv[] = { "-chgrp", "admin", f1 };
> verify(fs, "-chgrp", argv, 1, fsShell, 0);
> .
> On Windows, this is the error information:
> org.apache.hadoop.util.Shell$ExitCodeException: GetSidFromAcctName error 
> (1332): No mapping between account names and security IDs was done.
> Invalid group name: admin
> This test case passed previously, but it looks like it incorrectly passed 
> because of another bug in FsShell#runCmdHandler 
> (https://issues.apache.org/jira/browse/HADOOP-9502). The original code in 
> FsShell#runCmdHandler may not return error exit codes for some exceptions 
> (see private static int runCmdHandler(CmdHandler handler, FileStatus stat, 
> FileSystem srcFs, boolean recursive) throws IOException {}). Therefore, in 
> the previous Branch-1-win, even if admin is not a valid group, no error was 
> caught. The fix of HADOOP-9502 makes this test fail.
> This test also failed on Linux.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9719) Branch-1-win TestFsShellReturnCode#testChgrp() failed caused by incorrect exit codes

2013-07-10 Thread Xi Fang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xi Fang updated HADOOP-9719:


Description: 
TestFsShellReturnCode#testChgrp() failed when we try to use "-chgrp" to change 
group association of files to "admin".
{code}
// Test 1: exit code for chgrp on existing file is 0
String argv[] = { "-chgrp", "admin", f1 };
verify(fs, "-chgrp", argv, 1, fsShell, 0);
{code}
On Windows, this is the error information:
org.apache.hadoop.util.Shell$ExitCodeException: GetSidFromAcctName error 
(1332): No mapping between account names and security IDs was done.
Invalid group name: admin
This test case passed previously, but it looks like it incorrectly passed 
because of another bug in FsShell#runCmdHandler 
(https://issues.apache.org/jira/browse/HADOOP-9502). The original code in 
FsShell#runCmdHandler may not return error exit codes for some exceptions (see 
private static int runCmdHandler(CmdHandler handler, FileStatus stat, 
FileSystem srcFs, boolean recursive) throws IOException {}). Therefore, in 
the previous Branch-1-win, even if admin is not a valid group, no error was 
caught. The fix of HADOOP-9502 makes this test fail.

This test also failed on Linux.

  was:
TestFsShellReturnCode#testChgrp() failed when we try to use "-chgrp" to change 
group association of files to "admin".
// Test 1: exit code for chgrp on existing file is 0
String argv[] = { "-chgrp", "admin", f1 };
verify(fs, "-chgrp", argv, 1, fsShell, 0);
.
On Windows, this is the error information:
org.apache.hadoop.util.Shell$ExitCodeException: GetSidFromAcctName error 
(1332): No mapping between account names and security IDs was done.
Invalid group name: admin
This test case passed previously, but it looks like it incorrectly passed 
because of another bug in FsShell#runCmdHandler 
(https://issues.apache.org/jira/browse/HADOOP-9502). The original code in 
FsShell#runCmdHandler may not return error exit codes for some exceptions (see 
private static int runCmdHandler(CmdHandler handler, FileStatus stat, 
FileSystem srcFs, boolean recursive) throws IOException {}). Therefore, in 
the previous Branch-1-win, even if admin is not a valid group, no error was 
caught. The fix of HADOOP-9502 makes this test fail.

This test also failed on Linux.


> Branch-1-win TestFsShellReturnCode#testChgrp() failed caused by incorrect 
> exit codes
> 
>
> Key: HADOOP-9719
> URL: https://issues.apache.org/jira/browse/HADOOP-9719
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 1-win
>Reporter: Xi Fang
>Assignee: Xi Fang
>Priority: Minor
>  Labels: test
> Fix For: 1-win
>
> Attachments: HADOOP-9719.patch
>
>
> TestFsShellReturnCode#testChgrp() failed when we try to use "-chgrp" to 
> change group association of files to "admin".
> {code}
> // Test 1: exit code for chgrp on existing file is 0
> String argv[] = { "-chgrp", "admin", f1 };
> verify(fs, "-chgrp", argv, 1, fsShell, 0);
> {code}
> On Windows, this is the error information:
> org.apache.hadoop.util.Shell$ExitCodeException: GetSidFromAcctName error 
> (1332): No mapping between account names and security IDs was done.
> Invalid group name: admin
> This test case passed previously, but it looks like it incorrectly passed 
> because of another bug in FsShell#runCmdHandler 
> (https://issues.apache.org/jira/browse/HADOOP-9502). The original code in 
> FsShell#runCmdHandler may not return error exit codes for some exceptions 
> (see private static int runCmdHandler(CmdHandler handler, FileStatus stat, 
> FileSystem srcFs, boolean recursive) throws IOException {}). Therefore, in 
> the previous Branch-1-win, even if admin is not a valid group, no error was 
> caught. The fix of HADOOP-9502 makes this test fail.
> This test also failed on Linux.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9688) Add globally unique Client ID to RPC requests

2013-07-10 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13705032#comment-13705032
 ] 

Suresh Srinivas commented on HADOOP-9688:
-

[~shv] Thanks for the comments.
bq. One question: should the ClientId be on the RPC client or the DFSClient? 
Now both have an id, which makes one of them redundant.
Having it in the RPC client avoids collisions. If it were in DFSClient, the 
call ID would need to be generated across all the clients such that there is 
no collision.

bq. randomUUID is not unique. Even though, as Chris commented, the probability 
of a collision is low, there are ways to generate really unique clientIds, say 
as we generate storageIDs for DataNodes. Should we target that?
I think the randomUUID collision probability is very low. If not better, it 
should not be worse than storageID. See some links from my previous comment - 
https://issues.apache.org/jira/browse/HADOOP-9688?focusedCommentId=13702321&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13702321

Also note that the uniqueness requirement is a lot simpler here. The uniqueness 
we need is only for the period of the retry cache (around 10 minutes). Hence 
the probability of collision should be even lower.

bq. The naming of the new field could be confusing. You call it uuid in the 
Client and clientId in other places, if I understood it correctly.
Good idea. I will file another jira to rename the field to clientId. I will 
also document the field.
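
For reference, a sketch of generating such a 128-bit client ID from randomUUID 
(not necessarily the committed code):
{code}
import java.nio.ByteBuffer;
import java.util.UUID;

// Pack a random UUID into 16 bytes for the RPC header. Uniqueness only has
// to hold for the lifetime of the retry cache (around 10 minutes).
static byte[] newClientId() {
  UUID uuid = UUID.randomUUID();
  return ByteBuffer.allocate(16)
      .putLong(uuid.getMostSignificantBits())
      .putLong(uuid.getLeastSignificantBits())
      .array();
}
{code}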



> Add globally unique Client ID to RPC requests
> -
>
> Key: HADOOP-9688
> URL: https://issues.apache.org/jira/browse/HADOOP-9688
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Suresh Srinivas
>Assignee: Suresh Srinivas
>Priority: Blocker
> Fix For: 3.0.0
>
> Attachments: HADOOP-9688.clientId.1.patch, 
> HADOOP-9688.clientId.patch, HADOOP-9688.patch
>
>
> This is a subtask in hadoop-common related to HDFS-4942 to add unique request 
> ID to RPC requests.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9720) Rename Client#uuid to Client#clientId

2013-07-10 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-9720:


Issue Type: Sub-task  (was: Improvement)
Parent: HADOOP-9688

> Rename Client#uuid to Client#clientId
> -
>
> Key: HADOOP-9720
> URL: https://issues.apache.org/jira/browse/HADOOP-9720
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Suresh Srinivas
>
> To address the comment - 
> https://issues.apache.org/jira/browse/HADOOP-9688?focusedCommentId=13705032&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13705032

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Moved] (HADOOP-9720) Rename Client#uuid to Client#clientId

2013-07-10 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas moved HDFS-4976 to HADOOP-9720:
---

Component/s: (was: ha)
 (was: namenode)
Key: HADOOP-9720  (was: HDFS-4976)
Project: Hadoop Common  (was: Hadoop HDFS)

> Rename Client#uuid to Client#clientId
> -
>
> Key: HADOOP-9720
> URL: https://issues.apache.org/jira/browse/HADOOP-9720
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Suresh Srinivas
>
> To address the comment - 
> https://issues.apache.org/jira/browse/HADOOP-9688?focusedCommentId=13705032&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13705032

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9688) Add globally unique Client ID to RPC requests

2013-07-10 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13705037#comment-13705037
 ] 

Suresh Srinivas commented on HADOOP-9688:
-

HADOOP-9720 created to change the field name from uuid to clientId.

> Add globally unique Client ID to RPC requests
> -
>
> Key: HADOOP-9688
> URL: https://issues.apache.org/jira/browse/HADOOP-9688
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Suresh Srinivas
>Assignee: Suresh Srinivas
>Priority: Blocker
> Fix For: 3.0.0
>
> Attachments: HADOOP-9688.clientId.1.patch, 
> HADOOP-9688.clientId.patch, HADOOP-9688.patch
>
>
> This is a subtask in hadoop-common related to HDFS-4942 to add unique request 
> ID to RPC requests.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9392) Token based authentication and Single Sign On

2013-07-10 Thread Brian Swan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13705038#comment-13705038
 ] 

Brian Swan commented on HADOOP-9392:


Hi Tianyou-

Maybe I should have listed my last comment/question first, as it was the most 
important to me: One work item that fits into your design is that of adding 
token support to RPC endpoints. This is a work item that would add value for 
customers right away while still allowing flexibility in the rest of the 
design. This is something we would like to begin work on now (after consulting 
Daryn Sharp, since I understand he's been doing some work in this area). 
However, it's not clear to me (based on comments in the DISCUSS thread on 
common-dev) if you are already writing code for this. It would be unfortunate 
to duplicate work here. If you have something concrete to share, that would be 
great.

Regarding a client passing credentials to TAS: It seems that you are saying 
that a client would not pass credentials to TAS in all scenarios. This is not 
reflected in the diagram. I also am not sure what you mean by "TAS should be 
trusted by client for authentication". Trusting with *credentials* violates 
basic security principles, which I would not see as an improvement in Hadoop 
security.

IMHO, the best way to get to a common understanding of the details here is with 
code or with a much more narrowly-scoped discussion (which is what I was trying 
to say in my point #3). I *do* think that breaking things down into sub-tasks 
is a good idea - the DISCUSS thread on common-dev that I mentioned before has a 
great start to this (by component).

Thanks.

> Token based authentication and Single Sign On
> -
>
> Key: HADOOP-9392
> URL: https://issues.apache.org/jira/browse/HADOOP-9392
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: 3.0.0
>
> Attachments: token-based-authn-plus-sso.pdf, 
> token-based-authn-plus-sso-v2.0.pdf
>
>
> This is an umbrella entry for one of project Rhino’s topic, for details of 
> project Rhino, please refer to 
> https://github.com/intel-hadoop/project-rhino/. The major goal for this entry 
> as described in project Rhino was 
>  
> “Core, HDFS, ZooKeeper, and HBase currently support Kerberos authentication 
> at the RPC layer, via SASL. However this does not provide valuable attributes 
> such as group membership, classification level, organizational identity, or 
> support for user defined attributes. Hadoop components must interrogate 
> external resources for discovering these attributes and at scale this is 
> problematic. There is also no consistent delegation model. HDFS has a simple 
> delegation capability, and only Oozie can take limited advantage of it. We 
> will implement a common token based authentication framework to decouple 
> internal user and service authentication from external mechanisms used to 
> support it (like Kerberos)”
>  
> We’d like to start our work from Hadoop-Common and try to provide common 
> facilities by extending existing authentication framework which support:
> 1.Pluggable token provider interface 
> 2.Pluggable token verification protocol and interface
> 3.Security mechanism to distribute secrets in cluster nodes
> 4.Delegation model of user authentication

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8873) Port HADOOP-8175 (Add mkdir -p flag) to branch-1

2013-07-10 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13705046#comment-13705046
 ] 

Suresh Srinivas commented on HADOOP-8873:
-

[~ajisakaa] I took a quick look at HADOOP-8175. The way I understand it is  
(correct me if I am wrong):
# Behavior of mkdir without -p flag remains the same
# mkdir with -p does not fail if the target directory already exists.

Given that this is compatible, we should be able to get this into branch-1.
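
A sketch of the expected shell behavior after the backport (exit codes via 
FsShell; illustrative, not a test from the patch):
{code}
FsShell shell = new FsShell(conf);
// Unchanged: plain mkdir still fails if the directory already exists.
// New: -p creates missing parents and succeeds even if the directory exists.
assertEquals(0, shell.run(new String[] { "-mkdir", "-p", "/data/dir" }));
assertEquals(0, shell.run(new String[] { "-mkdir", "-p", "/data/dir" }));
{code}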


> Port HADOOP-8175 (Add mkdir -p flag) to branch-1
> 
>
> Key: HADOOP-8873
> URL: https://issues.apache.org/jira/browse/HADOOP-8873
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 1.2.0
>Reporter: Eli Collins
>Assignee: Akira AJISAKA
>  Labels: newbie
> Attachments: HADOOP-8873-1.patch, HADOOP-8873-2.patch
>
>
> Per HADOOP-8551 let's port the mkdir -p option to branch-1 for a 1.x release 
> to help users transition to the new shell behavior. In Hadoop 2.x mkdir 
> currently requires the -p option to create parent directories but a program 
> that specifies it won't work on 1.x since it doesn't support this option.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Comment Edited] (HADOOP-8873) Port HADOOP-8175 (Add mkdir -p flag) to branch-1

2013-07-10 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13705046#comment-13705046
 ] 

Suresh Srinivas edited comment on HADOOP-8873 at 7/10/13 8:46 PM:
--

[~ajisakaa] I took a quick look at HADOOP-8175. The way I understand it is  
(correct me if I am wrong):
# Behavior of mkdir without -p flag remains the same
# New behavior: adds support for passing -p flag. mkdir with -p does not fail 
if the target directory already exists.

Given that this is compatible, we should be able to get this into branch-1.


  was (Author: sureshms):
[~ajisakaa] I took a quick look at HADOOP-8175. The way I understand it is  
(correct me if I am wrong):
# Behavior of mkdir without -p flag remains the same
# mkdir with -p does not fail if the target directory already exists.

Given that this is compatible, we should be able to get this into branch-1.

  
> Port HADOOP-8175 (Add mkdir -p flag) to branch-1
> 
>
> Key: HADOOP-8873
> URL: https://issues.apache.org/jira/browse/HADOOP-8873
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 1.2.0
>Reporter: Eli Collins
>Assignee: Akira AJISAKA
>  Labels: newbie
> Attachments: HADOOP-8873-1.patch, HADOOP-8873-2.patch
>
>
> Per HADOOP-8551 let's port the mkdir -p option to branch-1 for a 1.x release 
> to help users transition to the new shell behavior. In Hadoop 2.x mkdir 
> currently requires the -p option to create parent directories but a program 
> that specifies it won't work on 1.x since it doesn't support this option.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9601) Support native CRC on byte arrays

2013-07-10 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13705051#comment-13705051
 ] 

Colin Patrick McCabe commented on HADOOP-9601:
--

I still think this can be done without macros.  The easiest way is to combine 
the loop body with the "remainder" body.  You probably want something like a 
while (1) and then some if statements to handle the value of i.  If done 
carefully, it will not be slower.  And it will be much, much more readable.

There is another problem here: you are assuming that you can load a uint32_t 
from an unaligned address (i.e., one that is not a multiple of 4).  There is 
information about alignment here: http://lwn.net/Articles/260832/  Although 
this works on x86, it will fail on ARM and a lot of other architectures.  This 
may be something that needs to be opened as a separate bug, though, since I 
think other users of {{bulk_verify_crc}} are also doing this.
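
To illustrate the control-flow shape suggested above, here is a hedged sketch 
in Java (the real implementation is C, and this is not the actual 
{{bulk_verify_crc}}): one loop whose body branches on the bytes remaining, so 
the bulk stride and the 1-3 byte remainder share a single body instead of 
duplicating it via macros. For the alignment problem, a portable fix in C is to 
memcpy the four bytes into a local uint32_t rather than dereferencing an 
unaligned pointer.

{code}
// Sketch only: merges the bulk and remainder paths into one while-loop.
public class CrcLoopSketch {
  // Placeholder byte-mixing step; a real CRC would use a polynomial table.
  private static int step(int acc, byte b) {
    return (acc << 5) ^ (acc >>> 27) ^ (b & 0xff);
  }

  public static int checksum(byte[] data) {
    int acc = 0;
    int i = 0;
    while (true) {
      int remaining = data.length - i;
      if (remaining >= 4) {
        // Bulk path: consume a 4-byte stride per iteration.
        for (int k = 0; k < 4; k++) acc = step(acc, data[i + k]);
        i += 4;
      } else if (remaining > 0) {
        // Remainder path: consume the final 1-3 bytes one at a time.
        acc = step(acc, data[i++]);
      } else {
        break;
      }
    }
    return acc;
  }
}
{code}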

> Support native CRC on byte arrays
> -
>
> Key: HADOOP-9601
> URL: https://issues.apache.org/jira/browse/HADOOP-9601
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: performance, util
>Affects Versions: 3.0.0
>Reporter: Todd Lipcon
>Assignee: Gopal V
>  Labels: perfomance
> Attachments: HADOOP-9601-bench.patch, 
> HADOOP-9601-rebase+benchmark.patch, HADOOP-9601-trunk-rebase-2.patch, 
> HADOOP-9601-trunk-rebase.patch, HADOOP-9601-WIP-01.patch, 
> HADOOP-9601-WIP-02.patch
>
>
> When we first implemented the Native CRC code, we only did so for direct byte 
> buffers, because these correspond directly to native heap memory and thus 
> make it easy to access via JNI. We'd generally assumed that accessing byte[] 
> arrays from JNI was not efficient enough, but now that I know more about JNI 
> I don't think that's true -- we just need to make sure that the critical 
> sections where we lock the buffers are short.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9701) mvn site ambiguous links in hadoop-common

2013-07-10 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-9701:
-

Summary: mvn site ambiguous links in hadoop-common  (was: mvn site 
generation warning of an ambiguous link in Compatibility.apt)

> mvn site ambiguous links in hadoop-common
> -
>
> Key: HADOOP-9701
> URL: https://issues.apache.org/jira/browse/HADOOP-9701
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0, 2.1.0-beta
>Reporter: Steve Loughran
>Assignee: Karthik Kambatla
>Priority: Minor
>
> {code}
> [INFO] Rendering site with org.apache.maven.skins:maven-stylus-skin:jar:1.2 
> skin.
> [WARNING] [APT Parser] Ambiguous link: 'InterfaceClassification.html'. If 
> this is a local link, prepend "./"!
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9701) mvn site ambiguous links in hadoop-common

2013-07-10 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-9701:
-

Description: 
{code}
[INFO] Rendering site with org.apache.maven.skins:maven-stylus-skin:jar:1.2 
skin.
[WARNING] [APT Parser] Ambiguous link: 'InterfaceClassification.html'. If this 
is a local link, prepend "./"!
{code}

Also, noticed a warning in SingleNodeSetup.apt

  was:
{code}
[INFO] Rendering site with org.apache.maven.skins:maven-stylus-skin:jar:1.2 
skin.
[WARNING] [APT Parser] Ambiguous link: 'InterfaceClassification.html'. If this 
is a local link, prepend "./"!
{code}


> mvn site ambiguous links in hadoop-common
> -
>
> Key: HADOOP-9701
> URL: https://issues.apache.org/jira/browse/HADOOP-9701
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0, 2.1.0-beta
>Reporter: Steve Loughran
>Assignee: Karthik Kambatla
>Priority: Minor
>
> {code}
> [INFO] Rendering site with org.apache.maven.skins:maven-stylus-skin:jar:1.2 
> skin.
> [WARNING] [APT Parser] Ambiguous link: 'InterfaceClassification.html'. If 
> this is a local link, prepend "./"!
> {code}
> Also, noticed a warning in SingleNodeSetup.apt

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9688) Add globally unique Client ID to RPC requests

2013-07-10 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13705049#comment-13705049
 ] 

Konstantin Shvachko commented on HADOOP-9688:
-

Should we then eliminate DFSClient.clientName and make 
DFSClient.getClientName() return Client.uuid?

> I think randomUUID collision probability is very low. If not better, it 
> should not be worse than storageID.

storageIDs are unique. There is no probability they will collide, as far as I 
remember. I am saying we can do the same for clientID. 
You know, in big clusters the most improbable events happen all the time.
Yes, I read all the comments.
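
For reference, a minimal sketch of a 16-byte ID derived from randomUUID (class 
and method names are illustrative, not the actual Client.java fields; as noted 
later in this thread, randomUUID is backed by SecureRandom):

{code}
// Hedged sketch: a compact 16-byte client ID from a random (version-4) UUID.
import java.nio.ByteBuffer;
import java.util.UUID;

public class ClientIdSketch {
  public static byte[] newClientId() {
    UUID uuid = UUID.randomUUID();            // version-4 UUID, SecureRandom-backed
    ByteBuffer buf = ByteBuffer.allocate(16); // compact 16-byte wire form
    buf.putLong(uuid.getMostSignificantBits());
    buf.putLong(uuid.getLeastSignificantBits());
    return buf.array();
  }
}
{code}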

> Add globally unique Client ID to RPC requests
> -
>
> Key: HADOOP-9688
> URL: https://issues.apache.org/jira/browse/HADOOP-9688
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Suresh Srinivas
>Assignee: Suresh Srinivas
>Priority: Blocker
> Fix For: 3.0.0
>
> Attachments: HADOOP-9688.clientId.1.patch, 
> HADOOP-9688.clientId.patch, HADOOP-9688.patch
>
>
> This is a subtask in hadoop-common related to HDFS-4942 to add unique request 
> ID to RPC requests.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9701) mvn site ambiguous links in hadoop-common

2013-07-10 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-9701:
-

Status: Patch Available  (was: Open)

> mvn site ambiguous links in hadoop-common
> -
>
> Key: HADOOP-9701
> URL: https://issues.apache.org/jira/browse/HADOOP-9701
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0, 2.1.0-beta
>Reporter: Steve Loughran
>Assignee: Karthik Kambatla
>Priority: Minor
> Attachments: hadoop-9701-1.patch
>
>
> {code}
> [INFO] Rendering site with org.apache.maven.skins:maven-stylus-skin:jar:1.2 
> skin.
> [WARNING] [APT Parser] Ambiguous link: 'InterfaceClassification.html'. If 
> this is a local link, prepend "./"!
> {code}
> Also, noticed a warning in SingleNodeSetup.apt

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9701) mvn site ambiguous links in hadoop-common

2013-07-10 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-9701:
-

Attachment: hadoop-9701-1.patch

Trivial patch that fixes the links.

> mvn site ambiguous links in hadoop-common
> -
>
> Key: HADOOP-9701
> URL: https://issues.apache.org/jira/browse/HADOOP-9701
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0, 2.1.0-beta
>Reporter: Steve Loughran
>Assignee: Karthik Kambatla
>Priority: Minor
> Attachments: hadoop-9701-1.patch
>
>
> {code}
> [INFO] Rendering site with org.apache.maven.skins:maven-stylus-skin:jar:1.2 
> skin.
> [WARNING] [APT Parser] Ambiguous link: 'InterfaceClassification.html'. If 
> this is a local link, prepend "./"!
> {code}
> Also, noticed a warning in SingleNodeSetup.apt

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8873) Port HADOOP-8175 (Add mkdir -p flag) to branch-1

2013-07-10 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13705060#comment-13705060
 ] 

Akira AJISAKA commented on HADOOP-8873:
---

[~sureshms] That's right. I'll make a patch for the following:

{quote}
# Behavior of mkdir without -p flag remains the same
# New behavior: adds support for passing -p flag. mkdir with -p does not fail 
if the target directory already exists.
{quote}

> Port HADOOP-8175 (Add mkdir -p flag) to branch-1
> 
>
> Key: HADOOP-8873
> URL: https://issues.apache.org/jira/browse/HADOOP-8873
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 1.2.0
>Reporter: Eli Collins
>Assignee: Akira AJISAKA
>  Labels: newbie
> Attachments: HADOOP-8873-1.patch, HADOOP-8873-2.patch
>
>
> Per HADOOP-8551 let's port the mkdir -p option to branch-1 for a 1.x release 
> to help users transition to the new shell behavior. In Hadoop 2.x mkdir 
> currently requires the -p option to create parent directories but a program 
> that specifies it won't work on 1.x since it doesn't support this option.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8873) Port HADOOP-8175 (Add mkdir -p flag) to branch-1

2013-07-10 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13705084#comment-13705084
 ] 

Suresh Srinivas commented on HADOOP-8873:
-

[~ajisakaa], in this patch, can you please update the documentation as well? 
Docs for the mkdir command are in 
src/docs/src/documentation/content/xdocs/file_system_shell.xml

> Port HADOOP-8175 (Add mkdir -p flag) to branch-1
> 
>
> Key: HADOOP-8873
> URL: https://issues.apache.org/jira/browse/HADOOP-8873
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 1.2.0
>Reporter: Eli Collins
>Assignee: Akira AJISAKA
>  Labels: newbie
> Attachments: HADOOP-8873-1.patch, HADOOP-8873-2.patch
>
>
> Per HADOOP-8551 let's port the mkdir -p option to branch-1 for a 1.x release 
> to help users transition to the new shell behavior. In Hadoop 2.x mkdir 
> currently requires the -p option to create parent directories but a program 
> that specifies it won't work on 1.x since it doesn't support this option.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9698) RPCv9 client must honor server's SASL negotiate response

2013-07-10 Thread Luke Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13705086#comment-13705086
 ] 

Luke Lu commented on HADOOP-9698:
-

Although it's desirable to make the mechanism negotiation work, which is a new 
feature in RPC v9, I'm not sure why this would be a blocker: no protocol change 
is necessary, and there is no real regression compared to earlier versions.

AFAICT, it'd require non-trivial changes to the current client to really make 
the negotiation work properly. I see no need to rush the change for 2.2.



> RPCv9 client must honor server's SASL negotiate response
> 
>
> Key: HADOOP-9698
> URL: https://issues.apache.org/jira/browse/HADOOP-9698
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Affects Versions: 3.0.0, 2.1.0-beta
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
>
> As of HADOOP-9421, an RPCv9 server will advertise its authentication methods.  
> This is meant to support features such as IP failover, better token 
> selection, and interoperability in a heterogeneous security environment.
> Currently the client ignores the negotiate response and just blindly attempts 
> to authenticate instead of choosing a mutually agreeable auth method.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9720) Rename Client#uuid to Client#clientId

2013-07-10 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-9720:
--

Affects Version/s: 3.0.0

> Rename Client#uuid to Client#clientId
> -
>
> Key: HADOOP-9720
> URL: https://issues.apache.org/jira/browse/HADOOP-9720
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Suresh Srinivas
>Assignee: Arpit Agarwal
> Attachments: HADOOP-9720.patch
>
>
> To address the comment - 
> https://issues.apache.org/jira/browse/HADOOP-9688?focusedCommentId=13705032&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13705032

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9720) Rename Client#uuid to Client#clientId

2013-07-10 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-9720:
--

Status: Patch Available  (was: Open)

> Rename Client#uuid to Client#clientId
> -
>
> Key: HADOOP-9720
> URL: https://issues.apache.org/jira/browse/HADOOP-9720
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Suresh Srinivas
> Attachments: HADOOP-9720.patch
>
>
> To address the comment - 
> https://issues.apache.org/jira/browse/HADOOP-9688?focusedCommentId=13705032&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13705032

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9720) Rename Client#uuid to Client#clientId

2013-07-10 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-9720:
--

Attachment: HADOOP-9720.patch

Trivial patch to rename the field.

> Rename Client#uuid to Client#clientId
> -
>
> Key: HADOOP-9720
> URL: https://issues.apache.org/jira/browse/HADOOP-9720
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Suresh Srinivas
> Attachments: HADOOP-9720.patch
>
>
> To address the comment - 
> https://issues.apache.org/jira/browse/HADOOP-9688?focusedCommentId=13705032&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13705032

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9720) Rename Client#uuid to Client#clientId

2013-07-10 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-9720:
--

Assignee: Arpit Agarwal

> Rename Client#uuid to Client#clientId
> -
>
> Key: HADOOP-9720
> URL: https://issues.apache.org/jira/browse/HADOOP-9720
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Suresh Srinivas
>Assignee: Arpit Agarwal
> Attachments: HADOOP-9720.patch
>
>
> To address the comment - 
> https://issues.apache.org/jira/browse/HADOOP-9688?focusedCommentId=13705032&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13705032

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9701) mvn site ambiguous links in hadoop-common

2013-07-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13705110#comment-13705110
 ] 

Hadoop QA commented on HADOOP-9701:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12591721/hadoop-9701-1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation patch that doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2761//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2761//console

This message is automatically generated.

> mvn site ambiguous links in hadoop-common
> -
>
> Key: HADOOP-9701
> URL: https://issues.apache.org/jira/browse/HADOOP-9701
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0, 2.1.0-beta
>Reporter: Steve Loughran
>Assignee: Karthik Kambatla
>Priority: Minor
> Attachments: hadoop-9701-1.patch
>
>
> {code}
> [INFO] Rendering site with org.apache.maven.skins:maven-stylus-skin:jar:1.2 
> skin.
> [WARNING] [APT Parser] Ambiguous link: 'InterfaceClassification.html'. If 
> this is a local link, prepend "./"!
> {code}
> Also, noticed a warning in SingleNodeSetup.apt

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9720) Rename Client#uuid to Client#clientId

2013-07-10 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13705112#comment-13705112
 ] 

Suresh Srinivas commented on HADOOP-9720:
-

[~arpitagarwal], thanks for jumping on this. [~shv], does this address your 
comment?

I plan on committing this by the end of the day.

> Rename Client#uuid to Client#clientId
> -
>
> Key: HADOOP-9720
> URL: https://issues.apache.org/jira/browse/HADOOP-9720
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Suresh Srinivas
>Assignee: Arpit Agarwal
> Attachments: HADOOP-9720.patch
>
>
> To address the comment - 
> https://issues.apache.org/jira/browse/HADOOP-9688?focusedCommentId=13705032&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13705032

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8873) Port HADOOP-8175 (Add mkdir -p flag) to branch-1

2013-07-10 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-8873:
--

Attachment: HADOOP-8873-3.patch

> Port HADOOP-8175 (Add mkdir -p flag) to branch-1
> 
>
> Key: HADOOP-8873
> URL: https://issues.apache.org/jira/browse/HADOOP-8873
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 1.2.0
>Reporter: Eli Collins
>Assignee: Akira AJISAKA
>  Labels: newbie
> Attachments: HADOOP-8873-1.patch, HADOOP-8873-2.patch, 
> HADOOP-8873-3.patch
>
>
> Per HADOOP-8551 let's port the mkdir -p option to branch-1 for a 1.x release 
> to help users transition to the new shell behavior. In Hadoop 2.x mkdir 
> currently requires the -p option to create parent directories but a program 
> that specifies it won't work on 1.x since it doesn't support this option.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8873) Port HADOOP-8175 (Add mkdir -p flag) to branch-1

2013-07-10 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-8873:
--

Hadoop Flags:   (was: Incompatible change)

> Port HADOOP-8175 (Add mkdir -p flag) to branch-1
> 
>
> Key: HADOOP-8873
> URL: https://issues.apache.org/jira/browse/HADOOP-8873
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 1.2.0
>Reporter: Eli Collins
>Assignee: Akira AJISAKA
>  Labels: newbie
> Attachments: HADOOP-8873-1.patch, HADOOP-8873-2.patch, 
> HADOOP-8873-3.patch
>
>
> Per HADOOP-8551 let's port the mkdir -p option to branch-1 for a 1.x release 
> to help users transition to the new shell behavior. In Hadoop 2.x mkdir 
> currently requires the -p option to create parent directories but a program 
> that specifies it won't work on 1.x since it doesn't support this option.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8873) Port HADOOP-8175 (Add mkdir -p flag) to branch-1

2013-07-10 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-8873:
--

Release Note: FsShell mkdir now accepts a -p flag. Like unix, mkdir -p will 
not fail if the directory already exists.  (was: FsShell mkdir now accepts a -p 
flag. Like unix, mkdir -p will not fail if the directory already exists.
The command doesn't auto-create parent directories without -p flag.)

> Port HADOOP-8175 (Add mkdir -p flag) to branch-1
> 
>
> Key: HADOOP-8873
> URL: https://issues.apache.org/jira/browse/HADOOP-8873
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 1.2.0
>Reporter: Eli Collins
>Assignee: Akira AJISAKA
>  Labels: newbie
> Attachments: HADOOP-8873-1.patch, HADOOP-8873-2.patch, 
> HADOOP-8873-3.patch
>
>
> Per HADOOP-8551 let's port the mkdir -p option to branch-1 for a 1.x release 
> to help users transition to the new shell behavior. In Hadoop 2.x mkdir 
> currently requires the -p option to create parent directories but a program 
> that specifies it won't work on 1.x since it doesn't support this option.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8873) Port HADOOP-8175 (Add mkdir -p flag) to branch-1

2013-07-10 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-8873:
--

Release Note: FsShell mkdir now accepts a -p flag. Like unix, mkdir -p will 
not fail if the directory already exists. Unlike unix, intermediate directories 
are always created, regardless of the flag, to avoid incompatibilities.  (was: 
FsShell mkdir now accepts a -p flag. Like unix, mkdir -p will not fail if the 
directory already exists.)

> Port HADOOP-8175 (Add mkdir -p flag) to branch-1
> 
>
> Key: HADOOP-8873
> URL: https://issues.apache.org/jira/browse/HADOOP-8873
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 1.2.0
>Reporter: Eli Collins
>Assignee: Akira AJISAKA
>  Labels: newbie
> Attachments: HADOOP-8873-1.patch, HADOOP-8873-2.patch, 
> HADOOP-8873-3.patch
>
>
> Per HADOOP-8551 let's port the mkdir -p option to branch-1 for a 1.x release 
> to help users transition to the new shell behavior. In Hadoop 2.x mkdir 
> currently requires the -p option to create parent directories but a program 
> that specifies it won't work on 1.x since it doesn't support this option.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8873) Port HADOOP-8175 (Add mkdir -p flag) to branch-1

2013-07-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13705123#comment-13705123
 ] 

Hadoop QA commented on HADOOP-8873:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12591737/HADOOP-8873-3.patch
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2763//console

This message is automatically generated.

> Port HADOOP-8175 (Add mkdir -p flag) to branch-1
> 
>
> Key: HADOOP-8873
> URL: https://issues.apache.org/jira/browse/HADOOP-8873
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 1.2.0
>Reporter: Eli Collins
>Assignee: Akira AJISAKA
>  Labels: newbie
> Attachments: HADOOP-8873-1.patch, HADOOP-8873-2.patch, 
> HADOOP-8873-3.patch
>
>
> Per HADOOP-8551 let's port the mkdir -p option to branch-1 for a 1.x release 
> to help users transition to the new shell behavior. In Hadoop 2.x mkdir 
> currently requires the -p option to create parent directories but a program 
> that specifies it won't work on 1.x since it doesn't support this option.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8873) Port HADOOP-8175 (Add mkdir -p flag) to branch-1

2013-07-10 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13705125#comment-13705125
 ] 

Akira AJISAKA commented on HADOOP-8873:
---

[~sureshms] Thank you for telling me the path of the documentation. I attached 
a new patch.

> Port HADOOP-8175 (Add mkdir -p flag) to branch-1
> 
>
> Key: HADOOP-8873
> URL: https://issues.apache.org/jira/browse/HADOOP-8873
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 1.2.0
>Reporter: Eli Collins
>Assignee: Akira AJISAKA
>  Labels: newbie
> Attachments: HADOOP-8873-1.patch, HADOOP-8873-2.patch, 
> HADOOP-8873-3.patch
>
>
> Per HADOOP-8551 let's port the mkdir -p option to branch-1 for a 1.x release 
> to help users transition to the new shell behavior. In Hadoop 2.x mkdir 
> currently requires the -p option to create parent directories but a program 
> that specifies it won't work on 1.x since it doesn't support this option.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8873) Port HADOOP-8175 (Add mkdir -p flag) to branch-1

2013-07-10 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-8873:


Status: Open  (was: Patch Available)

Canceling the patch. Submit Patch is an option only against trunk; if used 
against non-trunk branches, Jenkins tries to apply the patch against trunk and 
fails.

> Port HADOOP-8175 (Add mkdir -p flag) to branch-1
> 
>
> Key: HADOOP-8873
> URL: https://issues.apache.org/jira/browse/HADOOP-8873
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 1.2.0
>Reporter: Eli Collins
>Assignee: Akira AJISAKA
>  Labels: newbie
> Attachments: HADOOP-8873-1.patch, HADOOP-8873-2.patch, 
> HADOOP-8873-3.patch
>
>
> Per HADOOP-8551 let's port the mkdir -p option to branch-1 for a 1.x release 
> to help users transition to the new shell behavior. In Hadoop 2.x mkdir 
> currently requires the -p option to create parent directories but a program 
> that specifies it won't work on 1.x since it doesn't support this option.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9720) Rename Client#uuid to Client#clientId

2013-07-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13705151#comment-13705151
 ] 

Hadoop QA commented on HADOOP-9720:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12591731/HADOOP-9720.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2762//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2762//console

This message is automatically generated.

> Rename Client#uuid to Client#clientId
> -
>
> Key: HADOOP-9720
> URL: https://issues.apache.org/jira/browse/HADOOP-9720
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Suresh Srinivas
>Assignee: Arpit Agarwal
> Attachments: HADOOP-9720.patch
>
>
> To address the comment - 
> https://issues.apache.org/jira/browse/HADOOP-9688?focusedCommentId=13705032&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13705032

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9720) Rename Client#uuid to Client#clientId

2013-07-10 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13705152#comment-13705152
 ] 

Arpit Agarwal commented on HADOOP-9720:
---

Since this just renames a field, no new tests should be necessary.

> Rename Client#uuid to Client#clientId
> -
>
> Key: HADOOP-9720
> URL: https://issues.apache.org/jira/browse/HADOOP-9720
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Suresh Srinivas
>Assignee: Arpit Agarwal
> Attachments: HADOOP-9720.patch
>
>
> To address the comment - 
> https://issues.apache.org/jira/browse/HADOOP-9688?focusedCommentId=13705032&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13705032

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9698) RPCv9 client must honor server's SASL negotiate response

2013-07-10 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HADOOP-9698:


Priority: Blocker  (was: Critical)

There will be incompatibilities with older clients talking to newer servers if 
clients don't use the same proto/serverId as the server.  If we let the client 
continue to guess, we'll be locked in and unable to make changes.

I've had a patch ready since Monday, but it needs changes in HADOOP-9683 which 
are pending Suresh's review.  I spoke with Arun yesterday and he is willing to 
wait for this change.

> RPCv9 client must honor server's SASL negotiate response
> 
>
> Key: HADOOP-9698
> URL: https://issues.apache.org/jira/browse/HADOOP-9698
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Affects Versions: 3.0.0, 2.1.0-beta
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Blocker
>
> As of HADOOP-9421, an RPCv9 server will advertise its authentication methods.  
> This is meant to support features such as IP failover, better token 
> selection, and interoperability in a heterogeneous security environment.
> Currently the client ignores the negotiate response and just blindly attempts 
> to authenticate instead of choosing a mutually agreeable auth method.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9720) Rename Client#uuid to Client#clientId

2013-07-10 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13705165#comment-13705165
 ] 

Konstantin Shvachko commented on HADOOP-9720:
-

+1 Thanks.

> Rename Client#uuid to Client#clientId
> -
>
> Key: HADOOP-9720
> URL: https://issues.apache.org/jira/browse/HADOOP-9720
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Suresh Srinivas
>Assignee: Arpit Agarwal
> Attachments: HADOOP-9720.patch
>
>
> To address the comment - 
> https://issues.apache.org/jira/browse/HADOOP-9688?focusedCommentId=13705032&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13705032

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9392) Token based authentication and Single Sign On

2013-07-10 Thread Tianyou Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13705174#comment-13705174
 ] 

Tianyou Li commented on HADOOP-9392:


Hi James, 

Thanks for reviewing. In the Web SSO flow, the IdP usually issues a token that 
is signed to ensure data integrity. The token issued by the IdP as a result of 
authentication cannot be modified, because the signing key is a secret of the 
IdP; other parties cannot obtain the signing key, so they cannot alter the 
token. 

Moreover, once the client is redirected to the IdP for authentication, it 
usually needs to verify and accept the server certificate over SSL (https) as 
a step of establishing trust in the IdP; this ensures the credentials the 
client provides are routed to the trusted IdP via a secured channel. The TAS 
also needs to verify the signature of the token issued by that IdP, which 
proves the token was issued by the designated IdP and can be authenticated 
successfully with the TAS.

As mentioned above, TLS/SSL should be enabled to protect credential 
transmission during the authentication process with the IdP and to mitigate 
MITM attacks. To further improve client authN security, multi-factor 
authentication (such as an additional OTP) can also be employed; this is one 
of our design goals, though it might not have been explicitly mentioned.

Regards.
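
To make the verification step concrete, a minimal sketch (assuming an 
RSA-signed token; the actual HADOOP-9392 token format and APIs are not 
specified here, so all names are hypothetical):

{code}
// Hedged sketch: the TAS accepts a token only if it verifies against the
// IdP's public key, i.e., only the IdP's private key could have signed it.
import java.security.PublicKey;
import java.security.Signature;

public class TokenVerifySketch {
  public static boolean verify(PublicKey idpKey, byte[] token, byte[] sig)
      throws Exception {
    Signature s = Signature.getInstance("SHA256withRSA");
    s.initVerify(idpKey); // trust anchor: the IdP's published key
    s.update(token);      // the signed token payload
    return s.verify(sig); // true only for an untampered, IdP-signed token
  }
}
{code}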


> Token based authentication and Single Sign On
> -
>
> Key: HADOOP-9392
> URL: https://issues.apache.org/jira/browse/HADOOP-9392
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: 3.0.0
>
> Attachments: token-based-authn-plus-sso.pdf, 
> token-based-authn-plus-sso-v2.0.pdf
>
>
> This is an umbrella entry for one of project Rhino’s topic, for details of 
> project Rhino, please refer to 
> https://github.com/intel-hadoop/project-rhino/. The major goal for this entry 
> as described in project Rhino was 
>  
> “Core, HDFS, ZooKeeper, and HBase currently support Kerberos authentication 
> at the RPC layer, via SASL. However this does not provide valuable attributes 
> such as group membership, classification level, organizational identity, or 
> support for user defined attributes. Hadoop components must interrogate 
> external resources for discovering these attributes and at scale this is 
> problematic. There is also no consistent delegation model. HDFS has a simple 
> delegation capability, and only Oozie can take limited advantage of it. We 
> will implement a common token based authentication framework to decouple 
> internal user and service authentication from external mechanisms used to 
> support it (like Kerberos)”
>  
> We’d like to start our work from Hadoop-Common and try to provide common 
> facilities by extending the existing authentication framework to support:
> 1. Pluggable token provider interface
> 2. Pluggable token verification protocol and interface
> 3. Security mechanism to distribute secrets in cluster nodes
> 4. Delegation model of user authentication

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9721) Incorrect logging.properties file for hadoop-httpfs

2013-07-10 Thread Mark Grover (JIRA)
Mark Grover created HADOOP-9721:
---

 Summary: Incorrect logging.properties file for hadoop-httpfs
 Key: HADOOP-9721
 URL: https://issues.apache.org/jira/browse/HADOOP-9721
 Project: Hadoop Common
  Issue Type: Bug
  Components: build, conf
Affects Versions: 2.0.4-alpha
 Environment: Maven 3.0.2 on CentOS6.2
Reporter: Mark Grover


Tomcat ships with a default logging.properties file that's generic enough to be 
used; however, we already override it with a custom log file, as seen at 
https://github.com/apache/hadoop-common/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml#L557

This is necessary because we can have the log locations controlled by 
${httpfs.log.dir} (instead of the default ${catalina.base}/logs), control the 
prefix of the log file names, etc.

In any case, this overriding doesn't always happen. In my environment, the 
default logging.properties file doesn't get overridden by the custom one: the 
destination logging.properties file already exists, and the Maven pom's copy 
command silently fails and doesn't override it. If we explicitly delete the 
destination logging.properties file first, then the copy command completes 
successfully. You may notice we do the same thing with server.xml (which 
doesn't have this problem): we explicitly delete the destination file first and 
then copy it over. We should do the same with logging.properties as well.
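
A hedged sketch of the delete-then-copy idiom in plain Java (the real fix 
belongs in the httpfs pom's antrun configuration, mirroring how server.xml is 
handled; the paths here are hypothetical):

{code}
// java.nio's copy likewise refuses to overwrite an existing target unless
// told otherwise, so deleting first guarantees the custom file lands.
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class OverwriteSketch {
  public static void main(String[] args) throws Exception {
    Path src = Paths.get("src/main/tomcat/logging.properties");     // custom file
    Path dest = Paths.get("target/tomcat/conf/logging.properties"); // default
    Files.deleteIfExists(dest); // remove the pre-existing default first...
    Files.copy(src, dest);      // ...so the copy can never be silently skipped
  }
}
{code}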

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9721) Incorrect logging.properties file for hadoop-httpfs

2013-07-10 Thread Mark Grover (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13705236#comment-13705236
 ] 

Mark Grover commented on HADOOP-9721:
-

Can someone assign this JIRA to me please?

> Incorrect logging.properties file for hadoop-httpfs
> ---
>
> Key: HADOOP-9721
> URL: https://issues.apache.org/jira/browse/HADOOP-9721
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, conf
>Affects Versions: 2.0.4-alpha
> Environment: Maven 3.0.2 on CentOS6.2
>Reporter: Mark Grover
>
> Tomcat ships with a default logging.properties file that's generic enough to 
> be used; however, we already override it with a custom log file, as seen at 
> https://github.com/apache/hadoop-common/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml#L557
> This is necessary because we can have the log locations controlled by 
> ${httpfs.log.dir} (instead of the default ${catalina.base}/logs), control the 
> prefix of the log file names, etc.
> In any case, this overriding doesn't always happen. In my environment, the 
> default logging.properties file doesn't get overridden by the custom one: the 
> destination logging.properties file already exists, and the Maven pom's copy 
> command silently fails and doesn't override it. If we explicitly delete the 
> destination logging.properties file first, then the copy command completes 
> successfully. You may notice we do the same thing with server.xml (which 
> doesn't have this problem): we explicitly delete the destination file first 
> and then copy it over. We should do the same with logging.properties as well.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9721) Incorrect logging.properties file for hadoop-httpfs

2013-07-10 Thread Mark Grover (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Grover updated HADOOP-9721:


Description: 
Tomcat ships with a default logging.properties file that's generic enough to be 
used; however, we already override it with a custom log file, as seen at 
https://github.com/apache/hadoop-common/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml#L557

This is necessary because we can have the log locations controlled by the 
httpfs.log.dir env variable (instead of the default catalina.base/logs), 
control the prefix of the log file names, etc.

In any case, this overriding doesn't always happen. In my environment, the 
default logging.properties file doesn't get overridden by the custom one: the 
destination logging.properties file already exists, and the Maven pom's copy 
command silently fails and doesn't override it. If we explicitly delete the 
destination logging.properties file first, then the copy command completes 
successfully. You may notice we do the same thing with server.xml (which 
doesn't have this problem): we explicitly delete the destination file first and 
then copy it over. We should do the same with logging.properties as well.

  was:
Tomcat ships with a default logging.properties file that's generic enough to be 
used; however, we already override it with a custom log file, as seen at 
https://github.com/apache/hadoop-common/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml#L557

This is necessary because we can have the log locations controlled by 
${httpfs.log.dir} (instead of the default ${catalina.base}/logs), control the 
prefix of the log file names, etc.

In any case, this overriding doesn't always happen. In my environment, the 
default logging.properties file doesn't get overridden by the custom one: the 
destination logging.properties file already exists, and the Maven pom's copy 
command silently fails and doesn't override it. If we explicitly delete the 
destination logging.properties file first, then the copy command completes 
successfully. You may notice we do the same thing with server.xml (which 
doesn't have this problem): we explicitly delete the destination file first and 
then copy it over. We should do the same with logging.properties as well.


> Incorrect logging.properties file for hadoop-httpfs
> ---
>
> Key: HADOOP-9721
> URL: https://issues.apache.org/jira/browse/HADOOP-9721
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, conf
>Affects Versions: 2.0.4-alpha
> Environment: Maven 3.0.2 on CentOS6.2
>Reporter: Mark Grover
>
> Tomcat ships with a default logging.properties file that's generic enough to 
> be used; however, we already override it with a custom log file, as seen at 
> https://github.com/apache/hadoop-common/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml#L557
> This is necessary because we can have the log locations controlled by the 
> httpfs.log.dir env variable (instead of the default catalina.base/logs), 
> control the prefix of the log file names, etc.
> In any case, this overriding doesn't always happen. In my environment, the 
> default logging.properties file doesn't get overridden by the custom one: the 
> destination logging.properties file already exists, and the Maven pom's copy 
> command silently fails and doesn't override it. If we explicitly delete the 
> destination logging.properties file first, then the copy command completes 
> successfully. You may notice we do the same thing with server.xml (which 
> doesn't have this problem): we explicitly delete the destination file first 
> and then copy it over. We should do the same with logging.properties as well.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9721) Incorrect logging.properties file for hadoop-httpfs

2013-07-10 Thread Mark Grover (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Grover updated HADOOP-9721:


Attachment: HADOOP-9721.1.patch

Attaching a patch

> Incorrect logging.properties file for hadoop-httpfs
> ---
>
> Key: HADOOP-9721
> URL: https://issues.apache.org/jira/browse/HADOOP-9721
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, conf
>Affects Versions: 2.0.4-alpha
> Environment: Maven 3.0.2 on CentOS6.2
>Reporter: Mark Grover
> Attachments: HADOOP-9721.1.patch
>
>
> Tomcat ships with a default logging.properties file that's generic enough to 
> be used; however, we already override it with a custom log file, as seen at 
> https://github.com/apache/hadoop-common/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml#L557
> This is necessary because we can have the log locations controlled by the 
> httpfs.log.dir env variable (instead of the default catalina.base/logs), 
> control the prefix of the log file names, etc.
> In any case, this overriding doesn't always happen. In my environment, the 
> default logging.properties file doesn't get overridden by the custom one: the 
> destination logging.properties file already exists, and the Maven pom's copy 
> command silently fails and doesn't override it. If we explicitly delete the 
> destination logging.properties file first, then the copy command completes 
> successfully. You may notice we do the same thing with server.xml (which 
> doesn't have this problem): we explicitly delete the destination file first 
> and then copy it over. We should do the same with logging.properties as well.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9721) Incorrect logging.properties file for hadoop-httpfs

2013-07-10 Thread Mark Grover (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Grover updated HADOOP-9721:


Fix Version/s: 2.1.0-beta
   3.0.0
   Status: Patch Available  (was: Open)

> Incorrect logging.properties file for hadoop-httpfs
> ---
>
> Key: HADOOP-9721
> URL: https://issues.apache.org/jira/browse/HADOOP-9721
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, conf
>Affects Versions: 2.0.4-alpha
> Environment: Maven 3.0.2 on CentOS6.2
>Reporter: Mark Grover
> Fix For: 3.0.0, 2.1.0-beta
>
> Attachments: HADOOP-9721.1.patch
>
>
> Tomcat ships with a default logging.properties file that's generic enough to 
> be used; however, we already override it with a custom log file, as seen at 
> https://github.com/apache/hadoop-common/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml#L557
> This is necessary because we can have the log locations controlled by the 
> httpfs.log.dir env variable (instead of the default catalina.base/logs), 
> control the prefix of the log file names, etc.
> In any case, this overriding doesn't always happen. In my environment, the 
> default logging.properties file doesn't get overridden by the custom one: the 
> destination logging.properties file already exists, and the Maven pom's copy 
> command silently fails and doesn't override it. If we explicitly delete the 
> destination logging.properties file first, then the copy command completes 
> successfully. You may notice we do the same thing with server.xml (which 
> doesn't have this problem): we explicitly delete the destination file first 
> and then copy it over. We should do the same with logging.properties as well.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9688) Add globally unique Client ID to RPC requests

2013-07-10 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13705253#comment-13705253
 ] 

Suresh Srinivas commented on HADOOP-9688:
-

bq. storageIDs are unique. There is no probability they will collide, as far as 
I remember. I am saying we can do the same for clientID.
The probability of StorageID collision is also non-zero; I am not sure how you 
can assert otherwise. If you look at the randomUUID method implementation, it 
also uses the SecureRandom class. Also, unlike storageID, UUIDs are compact, at 
16 bytes.

bq. You know, in big clusters the most improbable events happen all the time.
I think this improbable event can also happen for StorageID.

Unlike StorageID, retry has several factors that further reduce the probability 
(a rough bound follows below):
# The clientID + callID is used only for retried requests. See HADOOP-9717.
# The validity of an entry in the retry cache is short.
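
For scale, the standard birthday bound (a back-of-the-envelope addition, not a 
claim from the patch) on k random 128-bit IDs, in LaTeX:

{code}
% collision probability for k random 128-bit IDs (birthday bound)
p \approx \frac{k(k-1)}{2 \cdot 2^{128}}
% e.g. k = 10^9:
% p \approx \frac{10^{18}}{6.8 \times 10^{38}} \approx 1.5 \times 10^{-21}
{code}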


> Add globally unique Client ID to RPC requests
> -
>
> Key: HADOOP-9688
> URL: https://issues.apache.org/jira/browse/HADOOP-9688
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Suresh Srinivas
>Assignee: Suresh Srinivas
>Priority: Blocker
> Fix For: 3.0.0
>
> Attachments: HADOOP-9688.clientId.1.patch, 
> HADOOP-9688.clientId.patch, HADOOP-9688.patch
>
>
> This is a subtask in hadoop-common related to HDFS-4942 to add unique request 
> ID to RPC requests.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HADOOP-9719) Branch-1-win TestFsShellReturnCode#testChgrp() failed caused by incorrect exit codes

2013-07-10 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth resolved HADOOP-9719.
---

  Resolution: Fixed
Hadoop Flags: Reviewed

+1 for the patch.  I committed this to branch-1-win.  Thank you again, Xi!

> Branch-1-win TestFsShellReturnCode#testChgrp() failed caused by incorrect 
> exit codes
> 
>
> Key: HADOOP-9719
> URL: https://issues.apache.org/jira/browse/HADOOP-9719
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 1-win
>Reporter: Xi Fang
>Assignee: Xi Fang
>Priority: Minor
>  Labels: test
> Fix For: 1-win
>
> Attachments: HADOOP-9719.patch
>
>
> TestFsShellReturnCode#testChgrp() failed when we try to use "-chgrp" to 
> change group association of files to "admin".
> {code}
> // Test 1: exit code for chgrp on existing file is 0
> String argv[] = { "-chgrp", "admin", f1 };
> verify(fs, "-chgrp", argv, 1, fsShell, 0);
> {code}
> On Windows, this is the error information:
> org.apache.hadoop.util.Shell$ExitCodeException: GetSidFromAcctName error 
> (1332): No mapping between account names and security IDs was done.
> Invalid group name: admin
> This test case passed previously, but it looks like it passed incorrectly 
> because of another bug in FsShell#runCmdHandler 
> (https://issues.apache.org/jira/browse/HADOOP-9502). The original code in 
> FsShell#runCmdHandler may not return error exit codes for some exceptions 
> (see private static int runCmdHandler(CmdHandler handler, FileStatus stat, 
> FileSystem srcFs, boolean recursive) throws IOException {}). Therefore, in 
> the previous Branch-1-win, even if admin is not a valid group, no error was 
> caught. The fix of HADOOP-9502 makes this test fail.
> This test also failed on Linux.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9719) Branch-1-win TestFsShellReturnCode#testChgrp() failed caused by incorrect exit codes

2013-07-10 Thread Xi Fang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13705257#comment-13705257
 ] 

Xi Fang commented on HADOOP-9719:
-

Thanks Chris!

> Branch-1-win TestFsShellReturnCode#testChgrp() failed caused by incorrect 
> exit codes
> 
>
> Key: HADOOP-9719
> URL: https://issues.apache.org/jira/browse/HADOOP-9719
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 1-win
>Reporter: Xi Fang
>Assignee: Xi Fang
>Priority: Minor
>  Labels: test
> Fix For: 1-win
>
> Attachments: HADOOP-9719.patch
>
>
> TestFsShellReturnCode#testChgrp() failed when we try to use "-chgrp" to 
> change group association of files to "admin".
> {code}
> // Test 1: exit code for chgrp on existing file is 0
> String argv[] = { "-chgrp", "admin", f1 };
> verify(fs, "-chgrp", argv, 1, fsShell, 0);
> {code}
> On Windows, this is the error information:
> org.apache.hadoop.util.Shell$ExitCodeException: GetSidFromAcctName error 
> (1332): No mapping between account names and security IDs was done.
> Invalid group name: admin
> This test case passed previously, but it looks like it passed incorrectly 
> because of another bug in FsShell#runCmdHandler 
> (https://issues.apache.org/jira/browse/HADOOP-9502). The original code in 
> FsShell#runCmdHandler may not return error exit codes for some exceptions 
> (see private static int runCmdHandler(CmdHandler handler, FileStatus stat, 
> FileSystem srcFs, boolean recursive) throws IOException {}). Therefore, in 
> the previous Branch-1-win, even if admin is not a valid group, no error was 
> caught. The fix of HADOOP-9502 makes this test fail.
> This test also failed on Linux.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9721) Incorrect logging.properties file for hadoop-httpfs

2013-07-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13705259#comment-13705259
 ] 

Hadoop QA commented on HADOOP-9721:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12591761/HADOOP-9721.1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs-httpfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2764//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2764//console

This message is automatically generated.

> Incorrect logging.properties file for hadoop-httpfs
> ---
>
> Key: HADOOP-9721
> URL: https://issues.apache.org/jira/browse/HADOOP-9721
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, conf
>Affects Versions: 2.0.4-alpha
> Environment: Maven 3.0.2 on CentOS6.2
>Reporter: Mark Grover
> Fix For: 3.0.0, 2.1.0-beta
>
> Attachments: HADOOP-9721.1.patch
>
>
> Tomcat ships with a default logging.properties file that's generic enough to 
> be used; however, we already override it with a custom one, as seen at 
> https://github.com/apache/hadoop-common/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml#L557
> This is necessary because we want the log location controlled by the 
> httpfs.log.dir env variable (instead of the default catalina.base/logs), 
> control over the prefix of the log file names, etc.
> In any case, this overriding doesn't always happen. In my environment, the 
> custom logging.properties file doesn't get copied over. The reason is that 
> the destination logging.properties file already exists, and the Maven pom's 
> copy command silently fails and doesn't overwrite it. If we explicitly delete 
> the destination logging.properties file first, the copy command completes 
> successfully. You may notice we do the same thing with server.xml (which 
> doesn't have this problem): we explicitly delete the destination file first 
> and then copy it over. We should do the same with logging.properties as well.
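
A minimal sketch of the proposed pom change, mirroring the server.xml handling (an antrun-style snippet; file paths and property names here are assumptions for illustration, not the committed patch):

{code}
<!-- Delete the destination first so the copy cannot silently no-op. -->
<delete file="${httpfs.tomcat.dist.dir}/conf/logging.properties"/>
<copy file="${basedir}/src/main/tomcat/logging.properties"
      todir="${httpfs.tomcat.dist.dir}/conf"/>
{code}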

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9722) Branch-1-win TestNativeIO failed due to a Windows-incompatible test case

2013-07-10 Thread Xi Fang (JIRA)
Xi Fang created HADOOP-9722:
---

 Summary: Branch-1-win TestNativeIO failed due to a 
Windows-incompatible test case
 Key: HADOOP-9722
 URL: https://issues.apache.org/jira/browse/HADOOP-9722
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 1-win
 Environment: Windows
Reporter: Xi Fang
Assignee: Xi Fang
Priority: Minor
 Fix For: 1-win


org.apache.hadoop.io.nativeio.TestNativeIO#testPosixFadvise() failed on 
Windows. Here is the error information.
\dev\zero (The system cannot find the path specified)
java.io.FileNotFoundException: \dev\zero (The system cannot find the path 
specified)
at java.io.FileInputStream.open(Native Method)
at java.io.FileInputStream.<init>(FileInputStream.java:120)
at java.io.FileInputStream.<init>(FileInputStream.java:79)
at 
org.apache.hadoop.io.nativeio.TestNativeIO.testPosixFadvise(TestNativeIO.java:277)
The root cause is that "/dev/zero" is used, and Windows does not have devices 
like the Unix /dev/zero or /dev/random.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9722) Branch-1-win TestNativeIO failed due to a Windows-incompatible test case

2013-07-10 Thread Xi Fang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xi Fang updated HADOOP-9722:


Attachment: HADOOP-9722.patch

A patch is attached. On Windows, we skip the test "testPosixFadvise". 
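
A minimal sketch of the skip (assuming the Shell.WINDOWS flag from org.apache.hadoop.util.Shell; the fadvise body is elided, and this is not the attached patch itself):

{code}
public void testPosixFadvise() throws Exception {
  if (Shell.WINDOWS) {
    // Windows has no device like the unix /dev/zero, so this
    // test cannot run there; bail out early.
    return;
  }
  FileInputStream fis = new FileInputStream("/dev/zero");
  try {
    // ... exercise posix_fadvise against fis.getFD(), as in the original test
  } finally {
    fis.close();
  }
}
{code}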

> Branch-1-win TestNativeIO failed due to a Windows-incompatible test case
> 
>
> Key: HADOOP-9722
> URL: https://issues.apache.org/jira/browse/HADOOP-9722
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 1-win
> Environment: Windows
>Reporter: Xi Fang
>Assignee: Xi Fang
>Priority: Minor
> Fix For: 1-win
>
> Attachments: HADOOP-9722.patch
>
>
> org.apache.hadoop.io.nativeio.TestNativeIO#testPosixFadvise() failed on 
> Windows. Here is the error information.
> \dev\zero (The system cannot find the path specified)
> java.io.FileNotFoundException: \dev\zero (The system cannot find the path 
> specified)
> at java.io.FileInputStream.open(Native Method)
> at java.io.FileInputStream.<init>(FileInputStream.java:120)
> at java.io.FileInputStream.<init>(FileInputStream.java:79)
> at 
> org.apache.hadoop.io.nativeio.TestNativeIO.testPosixFadvise(TestNativeIO.java:277)
> The root cause is that "/dev/zero" is used, and Windows does not have 
> devices like the Unix /dev/zero or /dev/random.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Work started] (HADOOP-9722) Branch-1-win TestNativeIO failed due to a Windows-incompatible test case

2013-07-10 Thread Xi Fang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-9722 started by Xi Fang.

> Branch-1-win TestNativeIO failed due to a Windows-incompatible test case
> 
>
> Key: HADOOP-9722
> URL: https://issues.apache.org/jira/browse/HADOOP-9722
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 1-win
> Environment: Windows
>Reporter: Xi Fang
>Assignee: Xi Fang
>Priority: Minor
> Fix For: 1-win
>
> Attachments: HADOOP-9722.patch
>
>
> org.apache.hadoop.io.nativeio.TestNativeIO#testPosixFadvise() failed on 
> Windows. Here is the error information.
> \dev\zero (The system cannot find the path specified)
> java.io.FileNotFoundException: \dev\zero (The system cannot find the path 
> specified)
> at java.io.FileInputStream.open(Native Method)
> at java.io.FileInputStream.<init>(FileInputStream.java:120)
> at java.io.FileInputStream.<init>(FileInputStream.java:79)
> at 
> org.apache.hadoop.io.nativeio.TestNativeIO.testPosixFadvise(TestNativeIO.java:277)
> The root cause is that "/dev/zero" is used, and Windows does not have 
> devices like the Unix /dev/zero or /dev/random.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9673) NetworkTopology: when a node can't be added, print out its location for diagnostic purposes

2013-07-10 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-9673:
---

Fix Version/s: 2.1.0-beta

> NetworkTopology: when a node can't be added, print out its location for 
> diagnostic purposes
> ---
>
> Key: HADOOP-9673
> URL: https://issues.apache.org/jira/browse/HADOOP-9673
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: net
>Affects Versions: 2.2.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Trivial
> Fix For: 2.1.0-beta
>
> Attachments: HADOOP-9673.001.patch
>
>
> It would be nice if NetworkTopology would print out the network location of a 
> node if it couldn't be added.
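
For illustration, a minimal sketch of such a diagnostic (the isValidToAdd helper is hypothetical; this is not the attached patch):

{code}
// Hypothetical guard inside NetworkTopology#add: when the node is rejected,
// include its network location in the log and in the exception message.
if (!isValidToAdd(node)) {
  LOG.error("Cannot add node " + node + " with network location "
      + node.getNetworkLocation());
  throw new IllegalArgumentException("Cannot add node " + node
      + " with network location " + node.getNetworkLocation());
}
{code}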

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9661) Allow metrics sources to be extended

2013-07-10 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-9661:
---

Fix Version/s: (was: 2.2.0)
   2.1.0-beta

> Allow metrics sources to be extended
> 
>
> Key: HADOOP-9661
> URL: https://issues.apache.org/jira/browse/HADOOP-9661
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics
>Affects Versions: 2.0.5-alpha
>Reporter: Sandy Ryza
>Assignee: Sandy Ryza
> Fix For: 2.1.0-beta
>
> Attachments: HADOOP-9661.patch
>
>
> My use case is to create an FSQueueMetrics that extends QueueMetrics and 
> includes some additional fair-scheduler-specific information.
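
A minimal sketch of that use case (the gauge name and annotations are assumptions based on the metrics2 framework; not the committed patch), assuming QueueMetrics is opened up for subclassing as this issue proposes:

{code}
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.MutableGaugeInt;

// Hypothetical subclass: inherits the common queue metrics and adds a
// fair-scheduler-specific gauge.
@Metrics(context="yarn")
class FSQueueMetrics extends QueueMetrics {
  @Metric("Fair share of memory in MB")
  MutableGaugeInt fairShareMB;

  void setFairShareMB(int value) {
    fairShareMB.set(value);
  }
}
{code}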

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9355) Abstract symlink tests to use either FileContext or FileSystem

2013-07-10 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-9355:
---

Fix Version/s: (was: 2.2.0)
   2.1.0-beta

> Abstract symlink tests to use either FileContext or FileSystem
> --
>
> Key: HADOOP-9355
> URL: https://issues.apache.org/jira/browse/HADOOP-9355
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Fix For: 2.1.0-beta
>
> Attachments: hadoop-9355-1.patch, hadoop-9355-2.patch, 
> hadoop-9355-3.patch, hadoop-9355-5.patch, hadoop-9355-6.patch, 
> hadoop-9355-7.patch, hadoop-9355-wip.patch
>
>
> We'd like to run the symlink tests using both FileContext and the upcoming 
> FileSystem implementation. The first step here is abstracting the test logic 
> to run on an abstract filesystem implementation.
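
A minimal sketch of that abstraction (wrapper names are illustrative, not the committed patch): the shared test logic calls through a small wrapper, with one concrete subclass per underlying API:

{code}
import java.io.IOException;
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.Path;

// Hypothetical abstraction the shared symlink tests would code against.
abstract class FSTestWrapper {
  abstract void createSymlink(Path target, Path link, boolean createParent)
      throws IOException;
  abstract Path getLinkTarget(Path p) throws IOException;
}

// One wrapper per API; a FileSystem-backed twin would mirror this class.
class FileContextTestWrapper extends FSTestWrapper {
  private final FileContext fc;
  FileContextTestWrapper(FileContext fc) { this.fc = fc; }

  @Override
  void createSymlink(Path target, Path link, boolean createParent)
      throws IOException {
    fc.createSymlink(target, link, createParent);
  }

  @Override
  Path getLinkTarget(Path p) throws IOException {
    return fc.getLinkTarget(p);
  }
}
{code}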

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9416) Add new symlink resolution methods in FileSystem and FileSystemLinkResolver

2013-07-10 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-9416:
---

Fix Version/s: 2.1.0-beta

> Add new symlink resolution methods in FileSystem and FileSystemLinkResolver
> ---
>
> Key: HADOOP-9416
> URL: https://issues.apache.org/jira/browse/HADOOP-9416
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.0.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Fix For: 2.1.0-beta
>
> Attachments: hadoop-9416-1.patch, hadoop-9416-2.patch, 
> hadoop-9416-3.patch, hadoop-9416-4.patch, hadoop-9416-5.patch, 
> hadoop-9416-6.patch, hadoop-9416-7.patch, hadoop-9416-8.patch, 
> hadoop-9416-9.patch
>
>
> Add new methods for symlink resolution to FileSystem, and add resolution 
> support for FileSystem to FileSystemLinkResolver.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9418) Add symlink resolution support to DistributedFileSystem

2013-07-10 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-9418:


Attachment: hadoop-9418-8.patch

Another rev. I missed some DFS-only methods like recoverLease, concat, 
isFileClosed, and snapshots, so I added symlink resolution and tests for these 
methods.

Colin also noticed that fixRelativePart was getting called twice unnecessarily, 
so I removed it from getPathName and audited its usage in DFS.
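
For context, a minimal sketch of the resolver pattern used for such DFS-only methods (shaped after the FileSystemLinkResolver from HADOOP-9416; the details here are illustrative rather than the exact patch):

{code}
// Sketch: wrap a DFS-only call (recoverLease) so a symlink in the path is
// resolved and the call retried, possibly against another FileSystem.
public boolean recoverLease(final Path f) throws IOException {
  final Path absF = fixRelativePart(f);
  return new FileSystemLinkResolver<Boolean>() {
    @Override
    public Boolean doCall(final Path p) throws IOException {
      // p has had one more link component resolved on each retry
      return dfs.recoverLease(getPathName(p));
    }
    @Override
    public Boolean next(final FileSystem fs, final Path p) throws IOException {
      // The link target may belong to a different FileSystem instance
      if (fs instanceof DistributedFileSystem) {
        return ((DistributedFileSystem) fs).recoverLease(p);
      }
      throw new UnsupportedOperationException("Cannot recoverLease through"
          + " a symlink to a non-DistributedFileSystem: " + f + " -> " + p);
    }
  }.resolve(this, absF);
}
{code}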

> Add symlink resolution support to DistributedFileSystem
> ---
>
> Key: HADOOP-9418
> URL: https://issues.apache.org/jira/browse/HADOOP-9418
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.0.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: hadoop-9418-1.patch, hadoop-9418-2.patch, 
> hadoop-9418-3.patch, hadoop-9418-4.patch, hadoop-9418-5.patch, 
> hadoop-9418-6.patch, hadoop-9418-7.patch, hadoop-9418-8.patch
>
>
> Add symlink resolution support to DistributedFileSystem as well as tests.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

