[jira] [Commented] (HADOOP-8419) GzipCodec NPE upon reset with IBM JDK

2012-11-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13494558#comment-13494558
 ] 

Hadoop QA commented on HADOOP-8419:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12552831/HADOOP-8419-trunk.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1729//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1729//console

This message is automatically generated.

> GzipCodec NPE upon reset with IBM JDK
> -
>
> Key: HADOOP-8419
> URL: https://issues.apache.org/jira/browse/HADOOP-8419
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Affects Versions: 1.0.3
>Reporter: Luke Lu
>Assignee: Yu Li
>  Labels: gzip, ibm-jdk
> Attachments: HADOOP-8419-branch-1.patch, HADOOP-8419-trunk.patch
>
>
> The GzipCodec will NPE upon reset after finish when the native zlib codec is 
> not loaded. When the native zlib is loaded, the codec creates a 
> CompressorOutputStream that doesn't have the problem; otherwise, the 
> GzipCodec uses GZIPOutputStream, which is extended to provide the resetState 
> method. Since IBM JDK 6 SR9 FP2, including the current JDK 6 SR10, 
> GZIPOutputStream#finish releases the underlying deflater, which causes an 
> NPE upon reset. This seems to be an IBM JDK quirk, as Sun JDK and OpenJDK 
> don't have this issue.
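
A minimal sketch of the failure mode described above, assuming a wrapper that 
exposes resetState() via GZIPOutputStream's protected deflater field (the class 
name here is illustrative, not the actual Hadoop class):

{code}
import java.io.IOException;
import java.io.OutputStream;
import java.util.zip.GZIPOutputStream;

// Illustrative sketch: a GZIPOutputStream extended with resetState(), as
// the description says GzipCodec does when native zlib is unavailable.
class ResettableGzipOutputStream extends GZIPOutputStream {
  ResettableGzipOutputStream(OutputStream out) throws IOException {
    super(out);
  }

  // On the affected IBM JDKs, finish() releases the protected deflater
  // "def", so this def.reset() then throws a NullPointerException.
  void resetState() {
    def.reset();
  }
}
{code}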

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8419) GzipCodec NPE upon reset with IBM JDK

2012-11-09 Thread Yu Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu Li updated HADOOP-8419:
--

Status: Patch Available  (was: In Progress)

> GzipCodec NPE upon reset with IBM JDK
> -
>
> Key: HADOOP-8419
> URL: https://issues.apache.org/jira/browse/HADOOP-8419
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Affects Versions: 1.0.3
>Reporter: Luke Lu
>Assignee: Yu Li
>  Labels: gzip, ibm-jdk
> Attachments: HADOOP-8419-branch-1.patch, HADOOP-8419-trunk.patch
>
>
> The GzipCodec will NPE upon reset after finish when the native zlib codec is 
> not loaded. When the native zlib is loaded, the codec creates a 
> CompressorOutputStream that doesn't have the problem; otherwise, the 
> GzipCodec uses GZIPOutputStream, which is extended to provide the resetState 
> method. Since IBM JDK 6 SR9 FP2, including the current JDK 6 SR10, 
> GZIPOutputStream#finish releases the underlying deflater, which causes an 
> NPE upon reset. This seems to be an IBM JDK quirk, as Sun JDK and OpenJDK 
> don't have this issue.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9019) KerberosAuthenticator.doSpnegoSequence(..) should create an HTTP principal with hostname every time

2012-11-09 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13494482#comment-13494482
 ] 

Aaron T. Myers commented on HADOOP-9019:


Got it. Thanks for the explanation.

I'm not opposed to this change, but it does seem like a bit of an odd use case. 
These machines have to have hostnames (with properly configured reverse DNS, no 
less), so I don't understand why folks would want to put IP addresses in their 
configs.

I won't object to the change if folks want to make it, though.

> KerberosAuthenticator.doSpnegoSequence(..) should create an HTTP principal 
> with hostname every time 
> --
>
> Key: HADOOP-9019
> URL: https://issues.apache.org/jira/browse/HADOOP-9019
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Vinay
>
> in KerberosAuthenticator.doSpnegoSequence(..), the following line of code will 
> just create a principal of the form "HTTP/",
> {code}String servicePrincipal = 
> KerberosUtil.getServicePrincipal("HTTP",
> KerberosAuthenticator.this.url.getHost());{code}
> but url.getHost() is not guaranteed to return a hostname; if the URL contains 
> an IP, then it just returns the IP.
> For SPNEGO authentication, the principal should always be created with a hostname.
> This code should be something like the following, which will look up /etc/hosts 
> to get the hostname:
> {code}String hostname = InetAddress.getByName(
> KerberosAuthenticator.this.url.getHost()).getHostName();
> String servicePrincipal = KerberosUtil.getServicePrincipal("HTTP",
> hostname);{code}
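
For context, a hedged illustration of the proposed lookup (the address and the 
resulting name below are hypothetical):

{code}
import java.net.InetAddress;
import java.net.UnknownHostException;

// Hypothetical illustration of the reverse lookup proposed above.
public class ReverseLookupDemo {
  public static void main(String[] args) throws UnknownHostException {
    // If the config holds "10.0.0.1", getHostName() consults reverse DNS
    // (or /etc/hosts) and may return e.g. "node1.example.com"; when no
    // mapping exists, it falls back to the IP string itself.
    String hostname = InetAddress.getByName("10.0.0.1").getHostName();
    System.out.println("HTTP/" + hostname);
  }
}
{code}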

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9021) Enforce configured SASL method on the server

2012-11-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13494476#comment-13494476
 ] 

Hadoop QA commented on HADOOP-9021:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12552915/HADOOP-9021.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1728//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1728//console

This message is automatically generated.

> Enforce configured SASL method on the server
> 
>
> Key: HADOOP-9021
> URL: https://issues.apache.org/jira/browse/HADOOP-9021
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc, security
>Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HADOOP-9021.patch
>
>
> The RPC needs to restrict itself to only using the configured SASL method.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HADOOP-8977) multiple FsShell test failures on Windows

2012-11-09 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas resolved HADOOP-8977.
-

   Resolution: Fixed
Fix Version/s: trunk-win
 Hadoop Flags: Reviewed

I committed the patch. Thank you Chris. Thank you Arpit for the review.

> multiple FsShell test failures on Windows
> -
>
> Key: HADOOP-8977
> URL: https://issues.apache.org/jira/browse/HADOOP-8977
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: trunk-win
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Fix For: trunk-win
>
> Attachments: HADOOP-8977-branch-trunk-win.patch, 
> HADOOP-8977-branch-trunk-win.patch, HADOOP-8977-branch-trunk-win.patch, 
> HADOOP-8977.patch
>
>
> Multiple FsShell-related tests fail on Windows.  Commands are returning 
> non-zero exit status.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-6311) Add support for unix domain sockets to JNI libs

2012-11-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13494456#comment-13494456
 ] 

Hadoop QA commented on HADOOP-6311:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12552900/HADOOP-6311.028.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1727//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1727//console

This message is automatically generated.

> Add support for unix domain sockets to JNI libs
> ---
>
> Key: HADOOP-6311
> URL: https://issues.apache.org/jira/browse/HADOOP-6311
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: native
>Affects Versions: 0.20.0
>Reporter: Todd Lipcon
>Assignee: Colin Patrick McCabe
> Attachments: 6311-trunk-inprogress.txt, design.txt, 
> HADOOP-6311.014.patch, HADOOP-6311.016.patch, HADOOP-6311.018.patch, 
> HADOOP-6311.020b.patch, HADOOP-6311.020.patch, HADOOP-6311.021.patch, 
> HADOOP-6311.022.patch, HADOOP-6311.023.patch, HADOOP-6311.024.patch, 
> HADOOP-6311.027.patch, HADOOP-6311.028.patch, HADOOP-6311-0.patch, 
> HADOOP-6311-1.patch, hadoop-6311.txt
>
>
> For HDFS-347 we need to use unix domain sockets. This JIRA is to include a 
> library in common which adds a o.a.h.net.unix package based on the code from 
> Android (apache 2 license)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8977) multiple FsShell test failures on Windows

2012-11-09 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-8977:


Attachment: HADOOP-8977.patch

Updated patch with minor nit taken care of.

> multiple FsShell test failures on Windows
> -
>
> Key: HADOOP-8977
> URL: https://issues.apache.org/jira/browse/HADOOP-8977
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: trunk-win
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-8977-branch-trunk-win.patch, 
> HADOOP-8977-branch-trunk-win.patch, HADOOP-8977-branch-trunk-win.patch, 
> HADOOP-8977.patch
>
>
> Multiple FsShell-related tests fail on Windows.  Commands are returning 
> non-zero exit status.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8977) multiple FsShell test failures on Windows

2012-11-09 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13494452#comment-13494452
 ] 

Suresh Srinivas commented on HADOOP-8977:
-

Patch looks good. Minor nit: use "} catch {" per the coding convention (see the sketch below).
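
A minimal illustration of that brace placement (the method body is hypothetical):

{code}
import java.io.IOException;

class BraceStyleDemo {
  void demo() {
    try {
      throw new IOException("demo");
    } catch (IOException e) { // closing brace and catch share a line
      System.err.println("handled: " + e.getMessage());
    }
  }
}
{code}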


> multiple FsShell test failures on Windows
> -
>
> Key: HADOOP-8977
> URL: https://issues.apache.org/jira/browse/HADOOP-8977
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: trunk-win
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-8977-branch-trunk-win.patch, 
> HADOOP-8977-branch-trunk-win.patch, HADOOP-8977-branch-trunk-win.patch
>
>
> Multiple FsShell-related tests fail on Windows.  Commands are returning 
> non-zero exit status.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9021) Enforce configured SASL method on the server

2012-11-09 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13494434#comment-13494434
 ] 

Robert Joseph Evans commented on HADOOP-9021:
-

My only real issue with this is that if no secret manager is passed in, and 
only TOKEN is configured, you get an error of "Server is not configured to 
accept any authentication". At a minimum, I would prefer to see a warning 
before that, noting that TOKEN was configured but no secret manager was passed 
in; preferably, a better error saying that a server configured with TOKEN must 
have a secret manager to go with it.

> Enforce configured SASL method on the server
> 
>
> Key: HADOOP-9021
> URL: https://issues.apache.org/jira/browse/HADOOP-9021
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc, security
>Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HADOOP-9021.patch
>
>
> The RPC needs to restrict itself to only using the configured SASL method.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9021) Enforce configured SASL method on the server

2012-11-09 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HADOOP-9021:


Attachment: HADOOP-9021.patch

Server will accept only the configured auth type, and/or will accept tokens if 
instantiated with a secret manager.
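
A minimal sketch of such a check, with hypothetical names (not the actual patch):

{code}
// Hedged sketch: reject any auth method other than the configured one, but
// still allow TOKEN when the server was built with a secret manager.
enum AuthMethod { SIMPLE, KERBEROS, TOKEN }

class AuthMethodCheck {
  private final AuthMethod configured;
  private final boolean hasSecretManager;

  AuthMethodCheck(AuthMethod configured, boolean hasSecretManager) {
    this.configured = configured;
    this.hasSecretManager = hasSecretManager;
  }

  void verify(AuthMethod requested) {
    boolean tokenAllowed = requested == AuthMethod.TOKEN && hasSecretManager;
    if (requested != configured && !tokenAllowed) {
      throw new SecurityException(
          "Server is not configured to accept " + requested + " authentication");
    }
  }
}
{code}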

> Enforce configured SASL method on the server
> 
>
> Key: HADOOP-9021
> URL: https://issues.apache.org/jira/browse/HADOOP-9021
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc, security
>Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HADOOP-9021.patch
>
>
> The RPC needs to restrict itself to only using the configured SASL method.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9021) Enforce configured SASL method on the server

2012-11-09 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HADOOP-9021:


Status: Patch Available  (was: Open)

> Enforce configured SASL method on the server
> 
>
> Key: HADOOP-9021
> URL: https://issues.apache.org/jira/browse/HADOOP-9021
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc, security
>Affects Versions: 2.0.0-alpha, 0.23.0, 3.0.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HADOOP-9021.patch
>
>
> The RPC needs to restrict itself to only using the configured SASL method.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9008) Building hadoop tarball fails on Windows

2012-11-09 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-9008:
--

Attachment: HADOOP-9008-branch-trunk-win.patch

Sorry, uploading the patch one more time, back to including just 
hadoop-dist/pom.xml and hadoop-project-dist/pom.xml. I forgot that the other 
module-specific ones are tracked in separate JIRAs.

> Building hadoop tarball fails on Windows
> 
>
> Key: HADOOP-9008
> URL: https://issues.apache.org/jira/browse/HADOOP-9008
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: trunk-win
>Reporter: Ivan Mitic
>Assignee: Chris Nauroth
> Attachments: HADOOP-9008-branch-trunk-win.patch, 
> HADOOP-9008-branch-trunk-win.patch, HADOOP-9008-branch-trunk-win.patch, 
> HADOOP-9008-branch-trunk-win.patch
>
>
> Trying to build Hadoop trunk tarball via {{mvn package -Pdist -DskipTests 
> -Dtar}} fails on Windows.
> The build system generates sh scripts that execute build tasks, which does not 
> work on Windows without Cygwin. It might make sense to apply the same pattern 
> as in HADOOP-8924 and use python instead of sh.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8989) hadoop dfs -find feature

2012-11-09 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13494356#comment-13494356
 ] 

Daryn Sharp commented on HADOOP-8989:
-

Wow, nice complement of expressions.  I'll try to review soon.

> hadoop dfs -find feature
> 
>
> Key: HADOOP-8989
> URL: https://issues.apache.org/jira/browse/HADOOP-8989
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Marco Nicosia
>Assignee: Jonathan Allen
> Attachments: HADOOP-8989.patch, HADOOP-8989.patch
>
>
> Both sysadmins and users make frequent use of the unix 'find' command, but 
> Hadoop has no correlate. Without this, users are writing scripts which make 
> heavy use of hadoop dfs -lsr, and implementing find one-offs. I think hdfs 
> -lsr is somewhat taxing on the NameNode, and a really slow experience on the 
> client side. Possibly an in-NameNode find operation would be only a bit more 
> taxing on the NameNode, but significantly faster from the client's point of 
> view?
> The minimum set of options I can think of which would make a Hadoop find 
> command generally useful is (in priority order):
> * -type (file or directory, for now)
> * -atime/-ctime/-mtime (... and -creationtime?) (both + and - arguments)
> * -print0 (for piping to xargs -0)
> * -depth
> * -owner/-group (and -nouser/-nogroup)
> * -name (allowing for shell pattern, or even regex?)
> * -perm
> * -size
> One possible special case, but could possibly be really cool if it ran from 
> within the NameNode:
> * -delete
> The "hadoop dfs -lsr | hadoop dfs -rm" cycle is really, really slow.
> Lower priority, some people do use operators, mostly to execute -or searches 
> such as:
> * find / \(-nouser -or -nogroup\)
> Finally, I thought I'd include a link to the [Posix spec for 
> find|http://www.opengroup.org/onlinepubs/009695399/utilities/find.html]

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9008) Building hadoop tarball fails on Windows

2012-11-09 Thread Raja Aluri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13494351#comment-13494351
 ] 

Raja Aluri commented on HADOOP-9008:


Chris,
It is just for code readability's sake; I will leave it to you. 
Sorry for not pointing this out earlier. 
Please consider using os.path.join() or os.path.abspath(os.path.join()) 
instead of normpath, for cross-platform compatibility. 

Also, for building a directory structure like the one below without 
platform-specific '/'s,

{code}
 arc_name = base_name + "/lib/native"
{code}
can be
{code}
os.path.join(base_name, 'lib', 'native')
{code}
{code}
>>> os.path.abspath(".")
'/Users/raja/work/repos/'
>>> os.path.abspath("../")
'/Users/raja/work'
>>> os.path.normpath("../../")
'../..'
>>> os.path.normpath("..")
'..'
{code}

> Building hadoop tarball fails on Windows
> 
>
> Key: HADOOP-9008
> URL: https://issues.apache.org/jira/browse/HADOOP-9008
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: trunk-win
>Reporter: Ivan Mitic
>Assignee: Chris Nauroth
> Attachments: HADOOP-9008-branch-trunk-win.patch, 
> HADOOP-9008-branch-trunk-win.patch, HADOOP-9008-branch-trunk-win.patch
>
>
> Trying to build Hadoop trunk tarball via {{mvn package -Pdist -DskipTests 
> -Dtar}} fails on Windows.
> The build system generates sh scripts that execute build tasks, which does not 
> work on Windows without Cygwin. It might make sense to apply the same pattern 
> as in HADOOP-8924 and use python instead of sh.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-6311) Add support for unix domain sockets to JNI libs

2012-11-09 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-6311:
-

Attachment: HADOOP-6311.028.patch

fix another JavaDoc warning.

> Add support for unix domain sockets to JNI libs
> ---
>
> Key: HADOOP-6311
> URL: https://issues.apache.org/jira/browse/HADOOP-6311
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: native
>Affects Versions: 0.20.0
>Reporter: Todd Lipcon
>Assignee: Colin Patrick McCabe
> Attachments: 6311-trunk-inprogress.txt, design.txt, 
> HADOOP-6311.014.patch, HADOOP-6311.016.patch, HADOOP-6311.018.patch, 
> HADOOP-6311.020b.patch, HADOOP-6311.020.patch, HADOOP-6311.021.patch, 
> HADOOP-6311.022.patch, HADOOP-6311.023.patch, HADOOP-6311.024.patch, 
> HADOOP-6311.027.patch, HADOOP-6311.028.patch, HADOOP-6311-0.patch, 
> HADOOP-6311-1.patch, hadoop-6311.txt
>
>
> For HDFS-347 we need to use unix domain sockets. This JIRA is to include a 
> library in common which adds a o.a.h.net.unix package based on the code from 
> Android (apache 2 license)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8860) Split MapReduce and YARN sections in documentation navigation

2012-11-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13494340#comment-13494340
 ] 

Hudson commented on HADOOP-8860:


Integrated in Hadoop-trunk-Commit #2995 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/2995/])
HADOOP-8860. Amendment to CHANGES.txt (tucu) (Revision 1407662)
HADOOP-8860. Split MapReduce and YARN sections in documentation navigation. 
(tomwhite via tucu) (Revision 1407658)

 Result = SUCCESS
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1407662
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt

tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1407658
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/CLIMiniCluster.apt.vm
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/ClusterSetup.apt.vm
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/SingleCluster.apt.vm
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/apt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/apt/Federation.apt.vm
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/apt/HDFSHighAvailabilityWithNFS.apt.vm
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/apt/HDFSHighAvailabilityWithQJM.apt.vm
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/apt/WebHDFS.apt.vm
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/apt
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/apt/EncryptedShuffle.apt.vm
* /hadoop/common/trunk/hadoop-project/src/site/apt/index.apt.vm
* /hadoop/common/trunk/hadoop-project/src/site/site.xml
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/CLIMiniCluster.apt.vm
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/ClusterSetup.apt.vm
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/EncryptedShuffle.apt.vm
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/Federation.apt.vm
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/HDFSHighAvailabilityWithNFS.apt.vm
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/HDFSHighAvailabilityWithQJM.apt.vm
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/SingleCluster.apt.vm
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/WebHDFS.apt.vm
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/index.apt.vm


> Split MapReduce and YARN sections in documentation navigation
> -
>
> Key: HADOOP-8860
> URL: https://issues.apache.org/jira/browse/HADOOP-8860
> Project: Hadoop Common
>  Issue Type: Task
>  Components: documentation
>Affects Versions: 2.0.1-alpha
>Reporter: Tom White
>Assignee: Tom White
> Attachments: HADOOP-8860.patch, HADOOP-8860.patch, HADOOP-8860.sh, 
> HADOOP-8860.sh
>
>
> This JIRA is to change the navigation on 
> http://hadoop.apache.org/docs/r2.0.1-alpha/ to reflect the fact that 
> MapReduce and YARN are separate modules/sub-projects.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8860) Split MapReduce and YARN sections in documentation navigation

2012-11-09 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13494338#comment-13494338
 ] 

Alejandro Abdelnur commented on HADOOP-8860:


Tom, I've committed the patch to trunk. Trying to backport it to branch-2 
fails because of the changes in the last patch; I guess the HDFS doc changes 
haven't made it to branch-2 yet. 

> Split MapReduce and YARN sections in documentation navigation
> -
>
> Key: HADOOP-8860
> URL: https://issues.apache.org/jira/browse/HADOOP-8860
> Project: Hadoop Common
>  Issue Type: Task
>  Components: documentation
>Affects Versions: 2.0.1-alpha
>Reporter: Tom White
>Assignee: Tom White
> Attachments: HADOOP-8860.patch, HADOOP-8860.patch, HADOOP-8860.sh, 
> HADOOP-8860.sh
>
>
> This JIRA is to change the navigation on 
> http://hadoop.apache.org/docs/r2.0.1-alpha/ to reflect the fact that 
> MapReduce and YARN are separate modules/sub-projects.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8860) Split MapReduce and YARN sections in documentation navigation

2012-11-09 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13494336#comment-13494336
 ] 

Alejandro Abdelnur commented on HADOOP-8860:


+1

> Split MapReduce and YARN sections in documentation navigation
> -
>
> Key: HADOOP-8860
> URL: https://issues.apache.org/jira/browse/HADOOP-8860
> Project: Hadoop Common
>  Issue Type: Task
>  Components: documentation
>Affects Versions: 2.0.1-alpha
>Reporter: Tom White
>Assignee: Tom White
> Attachments: HADOOP-8860.patch, HADOOP-8860.patch, HADOOP-8860.sh, 
> HADOOP-8860.sh
>
>
> This JIRA is to change the navigation on 
> http://hadoop.apache.org/docs/r2.0.1-alpha/ to reflect the fact that 
> MapReduce and YARN are separate modules/sub-projects.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9008) Building hadoop tarball fails on Windows

2012-11-09 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-9008:
--

Attachment: HADOOP-9008-branch-trunk-win.patch

Thank you, Raja.  I think you're right.  I have attached a new patch that 
simplifies the make_file_filter function and adds some comments to help prevent 
confusion around the Python tarfile.add API.

Also, I realized that my earlier patch did not include all of the files that I 
changed.  Can you please take another look?  There are smaller changes in a few 
other pom.xml files.


> Building hadoop tarball fails on Windows
> 
>
> Key: HADOOP-9008
> URL: https://issues.apache.org/jira/browse/HADOOP-9008
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: trunk-win
>Reporter: Ivan Mitic
>Assignee: Chris Nauroth
> Attachments: HADOOP-9008-branch-trunk-win.patch, 
> HADOOP-9008-branch-trunk-win.patch, HADOOP-9008-branch-trunk-win.patch
>
>
> Trying to build Hadoop trunk tarball via {{mvn package -Pdist -DskipTests 
> -Dtar}} fails on Windows.
> The build system generates sh scripts that execute build tasks, which does not 
> work on Windows without Cygwin. It might make sense to apply the same pattern 
> as in HADOOP-8924 and use python instead of sh.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HADOOP-8963) CopyFromLocal doesn't always create user directory

2012-11-09 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas resolved HADOOP-8963.
-

   Resolution: Fixed
Fix Version/s: 1.2.0
 Hadoop Flags: Reviewed

Committed the patch to branch-1.

Thank you Arpit.

> CopyFromLocal doesn't always create user directory
> --
>
> Key: HADOOP-8963
> URL: https://issues.apache.org/jira/browse/HADOOP-8963
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 1.0.3
>Reporter: Billie Rinaldi
>Assignee: Arpit Gupta
>Priority: Trivial
> Fix For: 1.2.0
>
> Attachments: HADOOP-8963.branch-1.patch, HADOOP-8963.branch-1.patch, 
> HADOOP-8963.branch-1.patch, HADOOP-8963.branch-1.patch, 
> HADOOP-8963.branch-1.patch
>
>
> When you use the command "hadoop fs -copyFromLocal filename ." before the 
> /user/username directory has been created, the file is created with name 
> /user/username instead of a directory being created with file 
> /user/username/filename.  The command "hadoop fs -copyFromLocal filename 
> filename" works as expected, creating /user/username and 
> /user/username/filename, and "hadoop fs -copyFromLocal filename ." works as 
> expected if the /user/username directory already exists.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8963) CopyFromLocal doesn't always create user directory

2012-11-09 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-8963:


Attachment: HADOOP-8963.branch-1.patch

Minor update to the patch to change the comment format to doc comments.

> CopyFromLocal doesn't always create user directory
> --
>
> Key: HADOOP-8963
> URL: https://issues.apache.org/jira/browse/HADOOP-8963
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 1.0.3
>Reporter: Billie Rinaldi
>Assignee: Arpit Gupta
>Priority: Trivial
> Attachments: HADOOP-8963.branch-1.patch, HADOOP-8963.branch-1.patch, 
> HADOOP-8963.branch-1.patch, HADOOP-8963.branch-1.patch, 
> HADOOP-8963.branch-1.patch
>
>
> When you use the command "hadoop fs -copyFromLocal filename ." before the 
> /user/username directory has been created, the file is created with name 
> /user/username instead of a directory being created with file 
> /user/username/filename.  The command "hadoop fs -copyFromLocal filename 
> filename" works as expected, creating /user/username and 
> /user/username/filename, and "hadoop fs -copyFromLocal filename ." works as 
> expected if the /user/username directory already exists.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9020) Add a SASL PLAIN server

2012-11-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13494299#comment-13494299
 ] 

Hudson commented on HADOOP-9020:


Integrated in Hadoop-trunk-Commit #2994 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/2994/])
HADOOP-9020. Add a SASL PLAIN server (daryn via bobby) (Revision 1407622)

 Result = SUCCESS
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1407622
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslPlainServer.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslRpcServer.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestSaslRPC.java


> Add a SASL PLAIN server
> ---
>
> Key: HADOOP-9020
> URL: https://issues.apache.org/jira/browse/HADOOP-9020
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc, security
>Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 3.0.0, 2.0.3-alpha
>
> Attachments: HADOOP-9020.patch
>
>
> Java includes a SASL PLAIN client but not a server.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9020) Add a SASL PLAIN server

2012-11-09 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-9020:


   Resolution: Fixed
Fix Version/s: 2.0.3-alpha
   3.0.0
   Status: Resolved  (was: Patch Available)

Thanks Daryn,

I put this into trunk and branch-2

> Add a SASL PLAIN server
> ---
>
> Key: HADOOP-9020
> URL: https://issues.apache.org/jira/browse/HADOOP-9020
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc, security
>Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 3.0.0, 2.0.3-alpha
>
> Attachments: HADOOP-9020.patch
>
>
> Java includes a SASL PLAIN client but not a server.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9020) Add a SASL PLAIN server

2012-11-09 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13494293#comment-13494293
 ] 

Robert Joseph Evans commented on HADOOP-9020:
-

Looks good to me. I am not a SASL expert, but as far as I can tell it meets 
the standard and complies with the SaslServer API.

I wonder a bit about having PLAIN be installed programmatically instead of 
through the Java security configuration, but I think it is OK because it is 
just for Hadoop.
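
For readers, a hedged sketch of what programmatic registration of a SASL 
mechanism looks like; the provider and factory names below follow the commit's 
file list but are assumptions here:

{code}
import java.security.Provider;
import java.security.Security;

// Hedged sketch: a JCA provider added at runtime instead of being listed
// in the JRE's java.security file. The factory class name is assumed from
// the commit's SaslPlainServer.java.
public class RegisterPlain {
  static class PlainProvider extends Provider {
    PlainProvider() {
      super("SaslPlainServer", 1.0, "SASL PLAIN mechanism");
      put("SaslServerFactory.PLAIN",
          "org.apache.hadoop.security.SaslPlainServer$SaslPlainServerFactory");
    }
  }

  public static void main(String[] args) {
    Security.addProvider(new PlainProvider());
  }
}
{code}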

+1

> Add a SASL PLAIN server
> ---
>
> Key: HADOOP-9020
> URL: https://issues.apache.org/jira/browse/HADOOP-9020
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc, security
>Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HADOOP-9020.patch
>
>
> Java includes a SASL PLAIN client but not a server.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9008) Building hadoop tarball fails on Windows

2012-11-09 Thread Raja Aluri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13494270#comment-13494270
 ] 

Raja Aluri commented on HADOOP-9008:


+1 for the change. A minor nit, which you may choose to address: I was 
wondering if the following code could eliminate some of the 'else' branches.
{code}
  def filter_func(tar_info):
    if tar_info.name == root:
      return tar_info
    elif tar_info.isfile() or tar_info.issym():
      if file_name_filter(basename(tar_info.name)):
        return tar_info
      else:
        return None
    else:
      return None
  return filter_func
{code}
I just thought the code would be more concise as:
{code}
  def filter_func(tar_info):
    if tar_info.name == root:
      return tar_info
    if tar_info.isfile() or tar_info.issym():
      if file_name_filter(basename(tar_info.name)):
        return tar_info
    return None
  return filter_func
{code}

> Building hadoop tarball fails on Windows
> 
>
> Key: HADOOP-9008
> URL: https://issues.apache.org/jira/browse/HADOOP-9008
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: trunk-win
>Reporter: Ivan Mitic
>Assignee: Chris Nauroth
> Attachments: HADOOP-9008-branch-trunk-win.patch, 
> HADOOP-9008-branch-trunk-win.patch
>
>
> Trying to build Hadoop trunk tarball via {{mvn package -Pdist -DskipTests 
> -Dtar}} fails on Windows.
> The build system generates sh scripts that execute build tasks, which does not 
> work on Windows without Cygwin. It might make sense to apply the same pattern 
> as in HADOOP-8924 and use python instead of sh.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9011) saveVersion.py does not include branch in version annotation

2012-11-09 Thread Raja Aluri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13494240#comment-13494240
 ] 

Raja Aluri commented on HADOOP-9011:


This part of the code (renaming the variable) seems unrelated to the change. 
Since it is a trivial change, +1.
{code}
-current_branch = filter_current_branch.search(branch).group(1).strip()
-url = "%s on branch %s" % (origin, current_branch)
+branch = filter_current_branch.search(branch).group(1).strip()
+url = "%s on branch %s" % (origin, branch)
{code}


> saveVersion.py does not include branch in version annotation
> 
>
> Key: HADOOP-9011
> URL: https://issues.apache.org/jira/browse/HADOOP-9011
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: trunk-win
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-9011-branch-trunk-win.patch
>
>
> HADOOP-8924 created saveVersion.py on branch-trunk-win.  Unlike 
> saveVersion.sh on trunk, it did not include the branch attribute in the 
> version annotation.  This causes errors at runtime for anything that tries to 
> read the annotation via VersionInfo.  This also causes a unit test failure in 
> TestYarnVersionInfo.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9023) HttpFs is too restrictive on usernames

2012-11-09 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13494237#comment-13494237
 ] 

Harsh J commented on HADOOP-9023:
-

That makes sense, and is also more relaxed than the current matcher, I think.

> HttpFs is too restrictive on usernames
> --
>
> Key: HADOOP-9023
> URL: https://issues.apache.org/jira/browse/HADOOP-9023
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Harsh J
>
> HttpFs tries to use UserProfile.USER_PATTERN to match all usernames before a 
> doAs impersonation function. This regex is too strict for most usernames, as 
> it disallows any special character at all. We should relax it more or ditch 
> needing to match things there.
> WebHDFS currently has no such limitations.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8989) hadoop dfs -find feature

2012-11-09 Thread Jonathan Allen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Allen updated HADOOP-8989:
---

Attachment: HADOOP-8989.patch

Not complete, but what's there is stable and could be reviewed if somebody 
wants to look at it.  I'll just be adding expressions and tidying up the 
documentation now.

The following expressions are implemented per the POSIX definition: -a, 
-atime, -depth, -group, -mtime, -name, -newer, -nogroup, -not, -nouser, -o, 
-perm, -print, -prune, -size, -type, -user.

I haven't included the following POSIX expressions, as they don't look 
applicable here: -xdev, -links, -ctime.

I've still got to add -exec and any other non-posix extensions that look useful.

> hadoop dfs -find feature
> 
>
> Key: HADOOP-8989
> URL: https://issues.apache.org/jira/browse/HADOOP-8989
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Marco Nicosia
>Assignee: Jonathan Allen
> Attachments: HADOOP-8989.patch, HADOOP-8989.patch
>
>
> Both sysadmins and users make frequent use of the unix 'find' command, but 
> Hadoop has no correlate. Without this, users are writing scripts which make 
> heavy use of hadoop dfs -lsr, and implementing find one-offs. I think hdfs 
> -lsr is somewhat taxing on the NameNode, and a really slow experience on the 
> client side. Possibly an in-NameNode find operation would be only a bit more 
> taxing on the NameNode, but significantly faster from the client's point of 
> view?
> The minimum set of options I can think of which would make a Hadoop find 
> command generally useful is (in priority order):
> * -type (file or directory, for now)
> * -atime/-ctime/-mtime (... and -creationtime?) (both + and - arguments)
> * -print0 (for piping to xargs -0)
> * -depth
> * -owner/-group (and -nouser/-nogroup)
> * -name (allowing for shell pattern, or even regex?)
> * -perm
> * -size
> One possible special case, but could possibly be really cool if it ran from 
> within the NameNode:
> * -delete
> The "hadoop dfs -lsr | hadoop dfs -rm" cycle is really, really slow.
> Lower priority, some people do use operators, mostly to execute -or searches 
> such as:
> * find / \(-nouser -or -nogroup\)
> Finally, I thought I'd include a link to the [Posix spec for 
> find|http://www.opengroup.org/onlinepubs/009695399/utilities/find.html]

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-7115) Add a cache for getpwuid_r and getpwgid_r calls

2012-11-09 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-7115:
---

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

committed to trunk and branch-2.

> Add a cache for getpwuid_r and getpwgid_r calls
> ---
>
> Key: HADOOP-7115
> URL: https://issues.apache.org/jira/browse/HADOOP-7115
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.22.0, 2.0.2-alpha, 0.23.4
>Reporter: Arun C Murthy
>Assignee: Alejandro Abdelnur
> Fix For: 0.22.1, 2.0.3-alpha
>
> Attachments: h-7115.1.patch, hadoop-7115-0.22.patch, 
> hadoop-7115-0.22.patch, HADOOP-7115.patch, HADOOP-7115.patch, 
> HADOOP-7115.patch, HADOOP-7115.patch
>
>
> As discussed in HADOOP-6978, a cache helps a lot.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9023) HttpFs is too restrictive on usernames

2012-11-09 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13494197#comment-13494197
 ] 

Alejandro Abdelnur commented on HADOOP-9023:


In a secure setup, the user names must match a Unix user name: 
{{[a-z_][a-z0-9_-]*[$]}}, with a min/max length of 1/31.

The correct thing to do would be for USER_PATTERN to enforce the same pattern 
as UNIX, both in HttpFS and in WebHDFS.
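
As an illustration, such a check might look like the following sketch; the 
trailing {{[$]}} is treated as optional here, on the assumption that it denotes 
the Samba-style trailing dollar sign:

{code}
import java.util.regex.Pattern;

// Hedged sketch of a Unix-style username check. The {0,30} quantifier
// approximates the 1..31 length bound, and "[$]?" makes the trailing
// dollar sign optional (an assumption, e.g. for machine accounts).
public class UnixUserPattern {
  private static final Pattern USER =
      Pattern.compile("[a-z_][a-z0-9_-]{0,30}[$]?");

  public static boolean isValid(String name) {
    return USER.matcher(name).matches();
  }

  public static void main(String[] args) {
    System.out.println(isValid("mapred"));   // true
    System.out.println(isValid("Bad User")); // false
  }
}
{code}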

> HttpFs is too restrictive on usernames
> --
>
> Key: HADOOP-9023
> URL: https://issues.apache.org/jira/browse/HADOOP-9023
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Harsh J
>
> HttpFs tries to use UserProfile.USER_PATTERN to match all usernames before a 
> doAs impersonation function. This regex is too strict for most usernames, as 
> it disallows any special character at all. We should relax it more or ditch 
> needing to match things there.
> WebHDFS currently has no such limitations.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-7115) Add a cache for getpwuid_r and getpwgid_r calls

2012-11-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13494190#comment-13494190
 ] 

Hudson commented on HADOOP-7115:


Integrated in Hadoop-trunk-Commit #2992 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/2992/])
HADOOP-7115. Add a cache for getpwuid_r and getpwgid_r calls (tucu) 
(Revision 1407577)

 Result = SUCCESS
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1407577
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/SecureIOUtils.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/nativeio/NativeIO.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/NativeIO.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/nativeio/TestNativeIO.java


> Add a cache for getpwuid_r and getpwgid_r calls
> ---
>
> Key: HADOOP-7115
> URL: https://issues.apache.org/jira/browse/HADOOP-7115
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.22.0, 2.0.2-alpha, 0.23.4
>Reporter: Arun C Murthy
>Assignee: Alejandro Abdelnur
> Fix For: 0.22.1, 2.0.3-alpha
>
> Attachments: h-7115.1.patch, hadoop-7115-0.22.patch, 
> hadoop-7115-0.22.patch, HADOOP-7115.patch, HADOOP-7115.patch, 
> HADOOP-7115.patch, HADOOP-7115.patch
>
>
> As discussed in HADOOP-6978, a cache helps a lot.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9023) HttpFs is too restrictive on usernames

2012-11-09 Thread Harsh J (JIRA)
Harsh J created HADOOP-9023:
---

 Summary: HttpFs is too restrictive on usernames
 Key: HADOOP-9023
 URL: https://issues.apache.org/jira/browse/HADOOP-9023
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Harsh J


HttpFs tries to use UserProfile.USER_PATTERN to match all usernames before a 
doAs impersonation function. This regex is too strict for most usernames, as it 
disallows any special character at all. We should relax it more or ditch 
needing to match things there.

WebHDFS currently has no such limitations.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-7115) Add a cache for getpwuid_r and getpwgid_r calls

2012-11-09 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13494164#comment-13494164
 ] 

Tom White commented on HADOOP-7115:
---

+1 the latest patch looks good to me.

> Add a cache for getpwuid_r and getpwgid_r calls
> ---
>
> Key: HADOOP-7115
> URL: https://issues.apache.org/jira/browse/HADOOP-7115
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.22.0, 2.0.2-alpha, 0.23.4
>Reporter: Arun C Murthy
>Assignee: Alejandro Abdelnur
> Fix For: 0.22.1, 2.0.3-alpha
>
> Attachments: h-7115.1.patch, hadoop-7115-0.22.patch, 
> hadoop-7115-0.22.patch, HADOOP-7115.patch, HADOOP-7115.patch, 
> HADOOP-7115.patch, HADOOP-7115.patch
>
>
> As discussed in HADOOP-6978, a cache helps a lot.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HADOOP-9006) Winutils should keep Administrators privileges intact

2012-11-09 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas resolved HADOOP-9006.
-

Resolution: Fixed

+1 for the change. Committed the patch to branch-1-win. Thank you Chuan.

> Winutils should keep Administrators privileges intact
> -
>
> Key: HADOOP-9006
> URL: https://issues.apache.org/jira/browse/HADOOP-9006
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 1-win
>Reporter: Chuan Liu
>Assignee: Chuan Liu
>Priority: Minor
> Fix For: 1-win
>
> Attachments: HADOOP-9006-branch-1-win.patch
>
>
> This issue was originally discovered by [~ivanmi]; citing his words:
> {quote}
> Current by design behavior is for winutils to ACL the folders only for the 
> user passed in thru chmod/chown. This causes some un-natural side effects in 
> cases where Hadoop services run in the context of a non-admin user. For 
> example, Administrators on the box will no longer be able to:
>  - delete files created in the context of Hadoop services (other users)
>  - check the size of the folder where HDFS blocks are stored
> {quote}
> In my opinion, it is natural for some special accounts on Windows to be able 
> to access all the folders, including Hadoop folders. This is similar to Linux, 
> where root can always access any directory regardless of the permissions 
> set on those directories.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8419) GzipCodec NPE upon reset with IBM JDK

2012-11-09 Thread Luke Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13494133#comment-13494133
 ] 

Luke Lu commented on HADOOP-8419:
-

Yu, please click Submit Patch to let Jenkins review the trunk patch.

> GzipCodec NPE upon reset with IBM JDK
> -
>
> Key: HADOOP-8419
> URL: https://issues.apache.org/jira/browse/HADOOP-8419
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Affects Versions: 1.0.3
>Reporter: Luke Lu
>Assignee: Yu Li
>  Labels: gzip, ibm-jdk
> Attachments: HADOOP-8419-branch-1.patch, HADOOP-8419-trunk.patch
>
>
> The GzipCodec will NPE upon reset after finish when the native zlib codec is 
> not loaded. When the native zlib is loaded, the codec creates a 
> CompressorOutputStream that doesn't have the problem; otherwise, the 
> GzipCodec uses GZIPOutputStream, which is extended to provide the resetState 
> method. Since IBM JDK 6 SR9 FP2, including the current JDK 6 SR10, 
> GZIPOutputStream#finish releases the underlying deflater, which causes an 
> NPE upon reset. This seems to be an IBM JDK quirk, as Sun JDK and OpenJDK 
> don't have this issue.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8615) EOFException in DecompressorStream.java needs to be more verbose

2012-11-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13494126#comment-13494126
 ] 

Hadoop QA commented on HADOOP-8615:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12552853/HADOOP-8615-ver3.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1726//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1726//console

This message is automatically generated.

> EOFException in DecompressorStream.java needs to be more verbose
> 
>
> Key: HADOOP-8615
> URL: https://issues.apache.org/jira/browse/HADOOP-8615
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Affects Versions: 0.20.2
>Reporter: Jeff Lord
>  Labels: patch
> Attachments: HADOOP-8615.patch, HADOOP-8615-release-0.20.2.patch, 
> HADOOP-8615-ver2.patch, HADOOP-8615-ver3.patch
>
>
> In ./src/core/org/apache/hadoop/io/compress/DecompressorStream.java, the
> following exception should at least report the file in which the error was
> encountered:
>   protected void getCompressedData() throws IOException {
>     checkStream();
>     int n = in.read(buffer, 0, buffer.length);
>     if (n == -1) {
>       throw new EOFException("Unexpected end of input stream");
>     }
> This would greatly help in debugging bad/corrupt files.
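
A minimal sketch of the kind of verbosity being requested; the 'source' label
is hypothetical plumbing (DecompressorStream itself only sees an InputStream),
not the actual patch:

{code}
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;

// Name the input in the EOFException so a bad or truncated file can be
// identified straight from the log.
class VerboseDecompressorInput {
  private final InputStream in;
  private final String source; // e.g. the file path, if the caller knows it
  private final byte[] buffer = new byte[4096];

  VerboseDecompressorInput(InputStream in, String source) {
    this.in = in;
    this.source = source;
  }

  int getCompressedData() throws IOException {
    int n = in.read(buffer, 0, buffer.length);
    if (n == -1) {
      throw new EOFException("Unexpected end of input stream"
          + (source == null ? "" : " while reading " + source));
    }
    return n;
  }
}
{code}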

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-7115) Add a cache for getpwuid_r and getpwgid_r calls

2012-11-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13494118#comment-13494118
 ] 

Hadoop QA commented on HADOOP-7115:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12552849/HADOOP-7115.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1725//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1725//console

This message is automatically generated.

> Add a cache for getpwuid_r and getpwgid_r calls
> ---
>
> Key: HADOOP-7115
> URL: https://issues.apache.org/jira/browse/HADOOP-7115
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.22.0, 2.0.2-alpha, 0.23.4
>Reporter: Arun C Murthy
>Assignee: Alejandro Abdelnur
> Fix For: 0.22.1, 2.0.3-alpha
>
> Attachments: h-7115.1.patch, hadoop-7115-0.22.patch, 
> hadoop-7115-0.22.patch, HADOOP-7115.patch, HADOOP-7115.patch, 
> HADOOP-7115.patch, HADOOP-7115.patch
>
>
> As discussed in HADOOP-6978, a cache helps a lot.
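
A minimal sketch of the idea (names and the TTL are illustrative, not the
actual NativeIO implementation): memoize id-to-name lookups so repeated
getpwuid_r-style calls through JNI are avoided while an entry is fresh.

{code}
import java.util.concurrent.ConcurrentHashMap;

class IdNameCache {
  // Stand-in for the JNI call that ultimately invokes getpwuid_r.
  interface NativeLookup {
    String lookup(int id);
  }

  private static final class Entry {
    final String name;
    final long expiresAt;
    Entry(String name, long expiresAt) {
      this.name = name;
      this.expiresAt = expiresAt;
    }
  }

  private final ConcurrentHashMap<Integer, Entry> cache =
      new ConcurrentHashMap<Integer, Entry>();
  private final long ttlMs;

  IdNameCache(long ttlMs) {
    this.ttlMs = ttlMs;
  }

  String getName(int id, NativeLookup nativeLookup) {
    long now = System.currentTimeMillis();
    Entry e = cache.get(id);
    if (e != null && now < e.expiresAt) {
      return e.name; // cache hit: no JNI round trip
    }
    String name = nativeLookup.lookup(id); // the expensive native call
    cache.put(id, new Entry(name, now + ttlMs));
    return name;
  }
}
{code}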

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-7115) Add a cache for getpwuid_r and getpwgid_r calls

2012-11-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13494106#comment-13494106
 ] 

Hadoop QA commented on HADOOP-7115:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12552849/HADOOP-7115.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1724//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1724//console

This message is automatically generated.

> Add a cache for getpwuid_r and getpwgid_r calls
> ---
>
> Key: HADOOP-7115
> URL: https://issues.apache.org/jira/browse/HADOOP-7115
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.22.0, 2.0.2-alpha, 0.23.4
>Reporter: Arun C Murthy
>Assignee: Alejandro Abdelnur
> Fix For: 0.22.1, 2.0.3-alpha
>
> Attachments: h-7115.1.patch, hadoop-7115-0.22.patch, 
> hadoop-7115-0.22.patch, HADOOP-7115.patch, HADOOP-7115.patch, 
> HADOOP-7115.patch, HADOOP-7115.patch
>
>
> As discussed in HADOOP-6978, a cache helps a lot.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8615) EOFException in DecompressorStream.java needs to be more verbose

2012-11-09 Thread thomastechs (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

thomastechs updated HADOOP-8615:



Resolved the coding standard issues mentioned in Andy's review.

> EOFException in DecompressorStream.java needs to be more verbose
> 
>
> Key: HADOOP-8615
> URL: https://issues.apache.org/jira/browse/HADOOP-8615
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Affects Versions: 0.20.2
>Reporter: Jeff Lord
>  Labels: patch
> Attachments: HADOOP-8615.patch, HADOOP-8615-release-0.20.2.patch, 
> HADOOP-8615-ver2.patch, HADOOP-8615-ver3.patch
>
>
> In ./src/core/org/apache/hadoop/io/compress/DecompressorStream.java, the
> following exception should at least report the file in which the error was
> encountered:
>   protected void getCompressedData() throws IOException {
>     checkStream();
>     int n = in.read(buffer, 0, buffer.length);
>     if (n == -1) {
>       throw new EOFException("Unexpected end of input stream");
>     }
> This would greatly help in debugging bad/corrupt files.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8615) EOFException in DecompressorStream.java needs to be more verbose

2012-11-09 Thread thomastechs (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

thomastechs updated HADOOP-8615:


Attachment: HADOOP-8615-ver3.patch

New patch incorporating Andy's fixes.

> EOFException in DecompressorStream.java needs to be more verbose
> 
>
> Key: HADOOP-8615
> URL: https://issues.apache.org/jira/browse/HADOOP-8615
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Affects Versions: 0.20.2
>Reporter: Jeff Lord
>  Labels: patch
> Attachments: HADOOP-8615.patch, HADOOP-8615-release-0.20.2.patch, 
> HADOOP-8615-ver2.patch, HADOOP-8615-ver3.patch
>
>
> In ./src/core/org/apache/hadoop/io/compress/DecompressorStream.java, the
> following exception should at least report the file in which the error was
> encountered:
>   protected void getCompressedData() throws IOException {
>     checkStream();
>     int n = in.read(buffer, 0, buffer.length);
>     if (n == -1) {
>       throw new EOFException("Unexpected end of input stream");
>     }
> This would greatly help in debugging bad/corrupt files.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8615) EOFException in DecompressorStream.java needs to be more verbose

2012-11-09 Thread thomastechs (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13494098#comment-13494098
 ] 

thomastechs commented on HADOOP-8615:
-

Thanks, Andy, for the review. I am incorporating your review comments and 
attaching the new patch.

> EOFException in DecompressorStream.java needs to be more verbose
> 
>
> Key: HADOOP-8615
> URL: https://issues.apache.org/jira/browse/HADOOP-8615
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Affects Versions: 0.20.2
>Reporter: Jeff Lord
>  Labels: patch
> Attachments: HADOOP-8615.patch, HADOOP-8615-release-0.20.2.patch, 
> HADOOP-8615-ver2.patch
>
>
> In ./src/core/org/apache/hadoop/io/compress/DecompressorStream.java, the
> following exception should at least report the file in which the error was
> encountered:
>   protected void getCompressedData() throws IOException {
>     checkStream();
>     int n = in.read(buffer, 0, buffer.length);
>     if (n == -1) {
>       throw new EOFException("Unexpected end of input stream");
>     }
> This would greatly help in debugging bad/corrupt files.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9006) Winutils should keep Administrators privileges intact

2012-11-09 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13494097#comment-13494097
 ] 

Ivan Mitic commented on HADOOP-9006:


Thanks Chuan.

I already reviewed this patch, so +1 from me.

The change builds fine and your test passes.

> Winutils should keep Administrators privileges intact
> -
>
> Key: HADOOP-9006
> URL: https://issues.apache.org/jira/browse/HADOOP-9006
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 1-win
>Reporter: Chuan Liu
>Assignee: Chuan Liu
>Priority: Minor
> Fix For: 1-win
>
> Attachments: HADOOP-9006-branch-1-win.patch
>
>
> This issue was originally discovered by [~ivanmi]. Citing his words:
> {quote}
> The current by-design behavior is for winutils to ACL the folders only for
> the user passed in through chmod/chown. This causes some unnatural side
> effects in cases where Hadoop services run in the context of a non-admin
> user. For example, Administrators on the box will no longer be able to:
>  - delete files created in the context of Hadoop services (other users)
>  - check the size of the folder where HDFS blocks are stored
> {quote}
> In my opinion, it is natural for some special accounts on Windows to be able
> to access all the folders, including Hadoop folders. This is similar to
> Linux, where the root user can always access any directory regardless of
> the permissions set on those directories.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-7115) Add a cache for getpwuid_r and getpwgid_r calls

2012-11-09 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-7115:
---

Attachment: HADOOP-7115.patch

Minor update: adding private/unstable annotations to the NativeIO.java class.

> Add a cache for getpwuid_r and getpwgid_r calls
> ---
>
> Key: HADOOP-7115
> URL: https://issues.apache.org/jira/browse/HADOOP-7115
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.22.0, 2.0.2-alpha, 0.23.4
>Reporter: Arun C Murthy
>Assignee: Alejandro Abdelnur
> Fix For: 0.22.1, 2.0.3-alpha
>
> Attachments: h-7115.1.patch, hadoop-7115-0.22.patch, 
> hadoop-7115-0.22.patch, HADOOP-7115.patch, HADOOP-7115.patch, 
> HADOOP-7115.patch, HADOOP-7115.patch
>
>
> As discussed in HADOOP-6978, a cache helps a lot.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-7115) Add a cache for getpwuid_r and getpwgid_r calls

2012-11-09 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-7115:
---

Attachment: HADOOP-7115.patch

New patch addressing Todd's last two comments.

> Add a cache for getpwuid_r and getpwgid_r calls
> ---
>
> Key: HADOOP-7115
> URL: https://issues.apache.org/jira/browse/HADOOP-7115
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.22.0, 2.0.2-alpha, 0.23.4
>Reporter: Arun C Murthy
>Assignee: Alejandro Abdelnur
> Fix For: 0.22.1, 2.0.3-alpha
>
> Attachments: h-7115.1.patch, hadoop-7115-0.22.patch, 
> hadoop-7115-0.22.patch, HADOOP-7115.patch, HADOOP-7115.patch, 
> HADOOP-7115.patch
>
>
> As discussed in HADOOP-6978, a cache helps a lot.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8860) Split MapReduce and YARN sections in documentation navigation

2012-11-09 Thread Tom White (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom White updated HADOOP-8860:
--

Attachment: HADOOP-8860.patch
HADOOP-8860.sh

Sorry about that. Here's an updated version.

> Split MapReduce and YARN sections in documentation navigation
> -
>
> Key: HADOOP-8860
> URL: https://issues.apache.org/jira/browse/HADOOP-8860
> Project: Hadoop Common
>  Issue Type: Task
>  Components: documentation
>Affects Versions: 2.0.1-alpha
>Reporter: Tom White
>Assignee: Tom White
> Attachments: HADOOP-8860.patch, HADOOP-8860.patch, HADOOP-8860.sh, 
> HADOOP-8860.sh
>
>
> This JIRA is to change the navigation on 
> http://hadoop.apache.org/docs/r2.0.1-alpha/ to reflect the fact that 
> MapReduce and YARN are separate modules/sub-projects.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8974) TestDFVariations fails on Windows

2012-11-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13494008#comment-13494008
 ] 

Hudson commented on HADOOP-8974:


Integrated in Hadoop-Mapreduce-trunk #1251 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1251/])
HADOOP-8974. TestDFVariations fails on Windows. Contributed by Chris 
Nauroth. (Revision 1407222)

 Result = FAILURE
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1407222
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestDFVariations.java


> TestDFVariations fails on Windows
> -
>
> Key: HADOOP-8974
> URL: https://issues.apache.org/jira/browse/HADOOP-8974
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.0, trunk-win
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Fix For: 3.0.0, trunk-win
>
> Attachments: HADOOP-8974.patch
>
>
> The test fails on Windows. This may be related to code ported into DF.java
> from branch-1-win.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8974) TestDFVariations fails on Windows

2012-11-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13493982#comment-13493982
 ] 

Hudson commented on HADOOP-8974:


Integrated in Hadoop-Hdfs-trunk #1221 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1221/])
HADOOP-8974. TestDFVariations fails on Windows. Contributed by Chris 
Nauroth. (Revision 1407222)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1407222
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestDFVariations.java


> TestDFVariations fails on Windows
> -
>
> Key: HADOOP-8974
> URL: https://issues.apache.org/jira/browse/HADOOP-8974
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.0, trunk-win
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Fix For: 3.0.0, trunk-win
>
> Attachments: HADOOP-8974.patch
>
>
> The test fails on Windows. This may be related to code ported into DF.java
> from branch-1-win.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8419) GzipCodec NPE upon reset with IBM JDK

2012-11-09 Thread Yu Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13493966#comment-13493966
 ] 

Yu Li commented on HADOOP-8419:
---

Test result on trunk:

Both with and without my patch, the UT case below failed, but judging from the
error message it should be unrelated to compression:
===
Tests in error:
  testRDNS(org.apache.hadoop.net.TestDNS): DNS server failure [response code 2]

Tests run: 1784, Failures: 0, Errors: 1, Skipped: 18

[INFO] 
[INFO] Reactor Summary:
[INFO]
[INFO] Apache Hadoop Main  SUCCESS [1.702s]
[INFO] Apache Hadoop Project POM . SUCCESS [3.812s]
[INFO] Apache Hadoop Annotations . SUCCESS [1.312s]
[INFO] Apache Hadoop Project Dist POM  SUCCESS [0.245s]
[INFO] Apache Hadoop Assemblies .. SUCCESS [0.335s]
[INFO] Apache Hadoop Auth  SUCCESS [6.754s]
[INFO] Apache Hadoop Auth Examples ... SUCCESS [0.322s]
[INFO] Apache Hadoop Common .. FAILURE [16:42.921s]
[INFO] Apache Hadoop Common Project .. SKIPPED
===

From the UT log we can see the error message below:
===
Tests run: 8, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 15.459 sec <<< 
FAILURE!
testRDNS(org.apache.hadoop.net.TestDNS)  Time elapsed: 15233 sec  <<< ERROR!
javax.naming.ServiceUnavailableException: DNS server failure [response code 2]; 
remaining name '81.122.30.9.in-addr.arpa'
at com.sun.jndi.dns.DnsClient.checkResponseCode(DnsClient.java:594)
at com.sun.jndi.dns.DnsClient.isMatchResponse(DnsClient.java:553)
===

> GzipCodec NPE upon reset with IBM JDK
> -
>
> Key: HADOOP-8419
> URL: https://issues.apache.org/jira/browse/HADOOP-8419
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Affects Versions: 1.0.3
>Reporter: Luke Lu
>Assignee: Yu Li
>  Labels: gzip, ibm-jdk
> Attachments: HADOOP-8419-branch-1.patch, HADOOP-8419-trunk.patch
>
>
> The GzipCodec will NPE upon reset after finish when the native zlib codec is
> not loaded. When native zlib is loaded, the codec creates a
> CompressorOutputStream that doesn't have the problem; otherwise, GzipCodec
> uses GZIPOutputStream, which is extended to provide the resetState method.
> Since IBM JDK 6 SR9 FP2, up to and including the current JDK 6 SR10,
> GZIPOutputStream#finish releases the underlying deflater, which causes an
> NPE upon reset. This seems to be an IBM JDK quirk, as Sun JDK and OpenJDK
> don't have this issue.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8419) GzipCodec NPE upon reset with IBM JDK

2012-11-09 Thread Yu Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13493965#comment-13493965
 ] 

Yu Li commented on HADOOP-8419:
---

Test result on branch-1:

Both with and without my patch, the UT cases below failed. I am not sure
whether it's an environment issue, but judging from the error messages they
should be unrelated to compression:

[junit] Running org.apache.hadoop.hdfs.TestDFSUpgradeFromImage
[junit] Tests run: 1, Failures: 0, Errors: 1, Time elapsed: 2.067 sec
[junit] Running org.apache.hadoop.hdfs.TestRestartDFS
[junit] Tests run: 2, Failures: 0, Errors: 2, Time elapsed: 16.016 sec
[junit] Running org.apache.hadoop.hdfs.TestSafeMode
[junit] Tests run: 3, Failures: 0, Errors: 2, Time elapsed: 64.601 sec
[junit] Running org.apache.hadoop.hdfs.server.namenode.TestCheckpoint
[junit] Tests run: 3, Failures: 0, Errors: 3, Time elapsed: 41.901 sec
[junit] Running org.apache.hadoop.hdfs.server.namenode.TestStartup
[junit] Tests run: 3, Failures: 2, Errors: 0, Time elapsed: 44.583 sec


All the cases with errors have messages like:
===
Edit log corruption detected: corruption length = 9748 > toleration length = 0; 
the corruption is intolerable.
java.io.IOException: Edit log corruption detected: corruption length = 9748 > 
toleration length = 0; the corruption is intolerable.
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLog.checkEndOfLog(FSEditLog.java:608)
===

The case with failures has an error message like:
===
java.io.IOException: Failed to parse edit log 
(/home/biadmin/hadoop/build/test/data/dfs/chkpt/current/edits) at position 555, 
edit log length is 690, opcode=0, isTolerationEnabled=false, Rec
ent opcode offsets=[65 124 244 388]
at 
org.apache.hadoop.hdfs.server.namenode.MetaRecoveryContext.editLogLoaderPrompt(MetaRecoveryContext.java:84)
===

> GzipCodec NPE upon reset with IBM JDK
> -
>
> Key: HADOOP-8419
> URL: https://issues.apache.org/jira/browse/HADOOP-8419
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Affects Versions: 1.0.3
>Reporter: Luke Lu
>Assignee: Yu Li
>  Labels: gzip, ibm-jdk
> Attachments: HADOOP-8419-branch-1.patch, HADOOP-8419-trunk.patch
>
>
> The GzipCodec will NPE upon reset after finish when the native zlib codec is
> not loaded. When native zlib is loaded, the codec creates a
> CompressorOutputStream that doesn't have the problem; otherwise, GzipCodec
> uses GZIPOutputStream, which is extended to provide the resetState method.
> Since IBM JDK 6 SR9 FP2, up to and including the current JDK 6 SR10,
> GZIPOutputStream#finish releases the underlying deflater, which causes an
> NPE upon reset. This seems to be an IBM JDK quirk, as Sun JDK and OpenJDK
> don't have this issue.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8419) GzipCodec NPE upon reset with IBM JDK

2012-11-09 Thread Yu Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13493960#comment-13493960
 ] 

Yu Li commented on HADOOP-8419:
---

The result of test-patch:

{color:green}+1 overall{color}.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.1) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.


> GzipCodec NPE upon reset with IBM JDK
> -
>
> Key: HADOOP-8419
> URL: https://issues.apache.org/jira/browse/HADOOP-8419
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Affects Versions: 1.0.3
>Reporter: Luke Lu
>Assignee: Yu Li
>  Labels: gzip, ibm-jdk
> Attachments: HADOOP-8419-branch-1.patch, HADOOP-8419-trunk.patch
>
>
> The GzipCodec will NPE upon reset after finish when the native zlib codec is
> not loaded. When native zlib is loaded, the codec creates a
> CompressorOutputStream that doesn't have the problem; otherwise, GzipCodec
> uses GZIPOutputStream, which is extended to provide the resetState method.
> Since IBM JDK 6 SR9 FP2, up to and including the current JDK 6 SR10,
> GZIPOutputStream#finish releases the underlying deflater, which causes an
> NPE upon reset. This seems to be an IBM JDK quirk, as Sun JDK and OpenJDK
> don't have this issue.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8419) GzipCodec NPE upon reset with IBM JDK

2012-11-09 Thread Yu Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu Li updated HADOOP-8419:
--

Attachment: HADOOP-8419-trunk.patch

> GzipCodec NPE upon reset with IBM JDK
> -
>
> Key: HADOOP-8419
> URL: https://issues.apache.org/jira/browse/HADOOP-8419
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Affects Versions: 1.0.3
>Reporter: Luke Lu
>Assignee: Yu Li
>  Labels: gzip, ibm-jdk
> Attachments: HADOOP-8419-branch-1.patch, HADOOP-8419-trunk.patch
>
>
> The GzipCodec will NPE upon reset after finish when the native zlib codec is
> not loaded. When native zlib is loaded, the codec creates a
> CompressorOutputStream that doesn't have the problem; otherwise, GzipCodec
> uses GZIPOutputStream, which is extended to provide the resetState method.
> Since IBM JDK 6 SR9 FP2, up to and including the current JDK 6 SR10,
> GZIPOutputStream#finish releases the underlying deflater, which causes an
> NPE upon reset. This seems to be an IBM JDK quirk, as Sun JDK and OpenJDK
> don't have this issue.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8240) Allow users to specify a checksum type on create()

2012-11-09 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13493910#comment-13493910
 ] 

Uma Maheswara Rao G commented on HADOOP-8240:
-

@Kihwal,

I created a file with the checksum-disabled option and I am seeing an
ArrayIndexOutOfBoundsException.

{code}
out = fs.create(fileName, FsPermission.getDefault(), flags,
    fs.getConf().getInt("io.file.buffer.size", 4096), replFactor,
    fs.getDefaultBlockSize(fileName), null, ChecksumOpt.createDisabled());
{code}

See the trace here:

{noformat}
java.lang.ArrayIndexOutOfBoundsException: 0
at org.apache.hadoop.fs.FSOutputSummer.int2byte(FSOutputSummer.java:178)
at 
org.apache.hadoop.fs.FSOutputSummer.writeChecksumChunk(FSOutputSummer.java:162)
at org.apache.hadoop.fs.FSOutputSummer.write1(FSOutputSummer.java:106)
at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:92)
at 
org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:54)
at java.io.DataOutputStream.write(DataOutputStream.java:90)
at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:261)
at 
org.apache.hadoop.hdfs.TestReplication.testBadBlockReportOnTransfer(TestReplication.java:174)
{noformat}

Have I missed any other configs that need to be set?


FSOutputSummer#int2byte does not check the length of the bytes array, so do
you think we should check the length before calling it in the CRC NULL case,
since there will not be any checksum bytes?

{code}
static byte[] int2byte(int integer, byte[] bytes) {
  bytes[0] = (byte)((integer >>> 24) & 0xFF);
  bytes[1] = (byte)((integer >>> 16) & 0xFF);
  bytes[2] = (byte)((integer >>>  8) & 0xFF);
  bytes[3] = (byte)((integer >>>  0) & 0xFF);
  return bytes;
}
{code}
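
Something along the lines below is what I have in mind (illustrative only,
not a committed fix):

{code}
// Skip the int -> byte conversion when the destination array is empty
// (the CRC NULL case), instead of unconditionally writing bytes[0..3].
static byte[] int2byte(int integer, byte[] bytes) {
  if (bytes.length >= 4) {
    bytes[0] = (byte)((integer >>> 24) & 0xFF);
    bytes[1] = (byte)((integer >>> 16) & 0xFF);
    bytes[2] = (byte)((integer >>>  8) & 0xFF);
    bytes[3] = (byte)((integer >>>  0) & 0xFF);
  }
  return bytes;
}
{code}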

Another point: if I create any file with ChecksumOpt.createDisabled, there is
no point in doing a block scan on that block in the DN, because the scan can
never detect the block as corrupt, since there are no CRC bytes. Unnecessary
block reads will happen via the block scanner for no purpose. Perhaps I have
misunderstood this JIRA; please correct me if I am wrong.



> Allow users to specify a checksum type on create()
> --
>
> Key: HADOOP-8240
> URL: https://issues.apache.org/jira/browse/HADOOP-8240
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 0.23.0
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
> Fix For: 0.23.3, 2.0.2-alpha
>
> Attachments: hadoop-8240-branch-0.23-alone.patch.txt, 
> hadoop-8240.patch, hadoop-8240-post-hadoop-8700-br2-trunk.patch.txt, 
> hadoop-8240-post-hadoop-8700-br2-trunk.patch.txt, 
> hadoop-8240-trunk-branch2.patch.txt, hadoop-8240-trunk-branch2.patch.txt, 
> hadoop-8240-trunk-branch2.patch.txt
>
>
> Per the discussion in HADOOP-8060, a way for users to specify a checksum
> type on create() is needed. The way the FileSystem cache works makes it
> impossible to use dfs.checksum.type to achieve this. Also, the
> checksum-related API is at the FileSystem level, so we prefer something at
> that level rather than an HDFS-specific one. The current proposal is to use
> CreateFlag.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8974) TestDFVariations fails on Windows

2012-11-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13493901#comment-13493901
 ] 

Hudson commented on HADOOP-8974:


Integrated in Hadoop-Yarn-trunk #31 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/31/])
HADOOP-8974. TestDFVariations fails on Windows. Contributed by Chris 
Nauroth. (Revision 1407222)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1407222
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestDFVariations.java


> TestDFVariations fails on Windows
> -
>
> Key: HADOOP-8974
> URL: https://issues.apache.org/jira/browse/HADOOP-8974
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.0, trunk-win
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Fix For: 3.0.0, trunk-win
>
> Attachments: HADOOP-8974.patch
>
>
> The test fails on Windows. This may be related to code ported into DF.java
> from branch-1-win.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira