[jira] [Commented] (HADOOP-8816) HTTP Error 413 full HEAD if using kerberos authentication

2013-01-11 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13551709#comment-13551709
 ] 

Eli Collins commented on HADOOP-8816:
-

+1  looks good

How about adding a comment to the test where you check the 63KB length, 
explaining that the buffer size being set applies to ALL headers combined; that 
is why you only add a 63KB header when the limit is 64KB, leaving 1KB of room 
for the other headers. No need to spin a new patch just to add this comment, 
IMO.
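The arithmetic behind that 63KB choice can be sketched as follows (a hypothetical illustration, not Hadoop's actual test code; `total_header_size` is a made-up helper that only approximates how Jetty charges bytes against its header buffer):

```python
# Jetty's header buffer limit applies to ALL request headers combined, so a
# test probing a 64KB limit sends a single 63KB header, leaving about 1KB of
# headroom for the request line and the other standard headers.

LIMIT = 64 * 1024  # the configured header buffer size under test


def total_header_size(headers):
    """Rough byte count charged against the header buffer: "key: value\r\n"."""
    return sum(len(k) + 2 + len(v) + 2 for k, v in headers.items())


headers = {
    "Host": "nn.example.com",
    "Authorization": "x" * (63 * 1024),  # stands in for a large Kerberos token
}
assert total_header_size(headers) < LIMIT  # a 63KB header still fits

headers["Authorization"] = "x" * (64 * 1024)
assert total_header_size(headers) >= LIMIT  # a full 64KB header would be rejected
```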

> HTTP Error 413 full HEAD if using kerberos authentication
> -
>
> Key: HADOOP-8816
> URL: https://issues.apache.org/jira/browse/HADOOP-8816
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: net
>Affects Versions: 2.0.1-alpha
> Environment: ubuntu linux with active directory kerberos.
>Reporter: Moritz Moeller
>Assignee: Moritz Moeller
> Attachments: HADOOP-8816.patch, 
> hadoop-common-kerberos-increase-http-header-buffer-size.patch
>
>
> The HTTP Authentication: header is too large when using Kerberos, and the 
> request is rejected by Jetty because Jetty's default header size limit is too 
> low.
> This can be fixed by adding ret.setHeaderBufferSize(1024*128); in 
> org.apache.hadoop.http.HttpServer.createDefaultChannelConnector

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8816) HTTP Error 413 full HEAD if using kerberos authentication

2013-01-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13551707#comment-13551707
 ] 

Hadoop QA commented on HADOOP-8816:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12564524/HADOOP-8816.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2031//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2031//console

This message is automatically generated.



[jira] [Updated] (HADOOP-8816) HTTP Error 413 full HEAD if using kerberos authentication

2013-01-11 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-8816:
---

Status: Patch Available  (was: Open)



[jira] [Assigned] (HADOOP-8816) HTTP Error 413 full HEAD if using kerberos authentication

2013-01-11 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur reassigned HADOOP-8816:
--

Assignee: Moritz Moeller



[jira] [Updated] (HADOOP-8816) HTTP Error 413 full HEAD if using kerberos authentication

2013-01-11 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-8816:
---

Attachment: HADOOP-8816.patch

I've just added a test case to Moritz's patch.



[jira] [Commented] (HADOOP-8498) Hadoop-1.0.3 didnt publish sources.jar to maven

2013-01-11 Thread Christopher Tubbs (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13551617#comment-13551617
 ] 

Christopher Tubbs commented on HADOOP-8498:
---

This doesn't appear to be done for any hadoop-core artifacts. Javadoc artifacts 
are also not published. See [the search 
results|http://search.maven.org/#search%7Cgav%7C1%7Cg%3A%22org.apache.hadoop%22%20AND%20a%3A%22hadoop-core%22].
 It really would be great to have this for future and past Hadoop releases.

> Hadoop-1.0.3 didnt publish sources.jar to maven
> ---
>
> Key: HADOOP-8498
> URL: https://issues.apache.org/jira/browse/HADOOP-8498
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 1.0.3
>Reporter: ryan rawson
>Priority: Minor
>
> on search.maven.org, only the JAR and POM for hadoop was published.  
> Sources.jar should also be published.  This helps developers who are writing 
> on top of hadoop, it allows their IDE to provide fully seamless and 
> integrated source browsing (and javadoc) without taking extra steps.  



[jira] [Commented] (HADOOP-8924) Hadoop Common creating package-info.java must not depend on sh, at least for Windows

2013-01-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13551565#comment-13551565
 ] 

Hadoop QA commented on HADOOP-8924:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12564490/HADOOP-8924.3.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 2 
warning messages.

{color:red}-1 eclipse:eclipse{color}.  The patch failed to build with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-assemblies hadoop-common-project/hadoop-common hadoop-maven-plugins 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common:

  org.apache.hadoop.ha.TestZKFailoverController

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2030//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2030//console

This message is automatically generated.

> Hadoop Common creating package-info.java must not depend on sh, at least for 
> Windows
> 
>
> Key: HADOOP-8924
> URL: https://issues.apache.org/jira/browse/HADOOP-8924
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: trunk-win
>Reporter: Chris Nauroth
>Assignee: Alejandro Abdelnur
> Fix For: trunk-win
>
> Attachments: HADOOP-8924.2.patch, HADOOP-8924.3.patch, 
> HADOOP-8924.3.patch, HADOOP-8924-branch-trunk-win.2.patch, 
> HADOOP-8924-branch-trunk-win.3.patch, HADOOP-8924-branch-trunk-win.patch, 
> HADOOP-8924.patch
>
>
> Currently, the build process relies on saveVersion.sh to generate 
> package-info.java with a version annotation.  The sh binary may not be 
> available on all developers' machines (e.g. Windows without Cygwin). This 
> issue tracks removal of that dependency in Hadoop Common.



[jira] [Commented] (HADOOP-9197) Some little confusion in official documentation

2013-01-11 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13551547#comment-13551547
 ] 

Suresh Srinivas commented on HADOOP-9197:
-

Also, you are better off reading the 1.x release documents instead of going 
back to very old releases.

> Some little confusion in official documentation
> ---
>
> Key: HADOOP-9197
> URL: https://issues.apache.org/jira/browse/HADOOP-9197
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Jason Lee
>Priority: Trivial
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> I am just a newbie to Hadoop. Recently I have been self-studying Hadoop, and 
> while reading the official documentation I found it a little confusing for 
> beginners like me. For example, look at the documents for the HDFS shell guide:
> In 0.17, the prefix of the HDFS shell is hadoop dfs:
> http://hadoop.apache.org/docs/r0.17.2/hdfs_shell.html
> In 0.19, the prefix of the HDFS shell is hadoop fs:
> http://hadoop.apache.org/docs/r0.19.1/hdfs_shell.html#lsr
> In 1.0.4, the prefix of the HDFS shell is hdfs dfs:
> http://hadoop.apache.org/docs/r1.0.4/file_system_shell.html#ls
> As a beginner, I think reading them is suffering.



[jira] [Commented] (HADOOP-9197) Some little confusion in official documentation

2013-01-11 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13551543#comment-13551543
 ] 

Suresh Srinivas commented on HADOOP-9197:
-

bq. As a beginner, i think reading them is suffering.

:-)

Can you add more details on why the documentation seemed confusing, unclear, 
or hard to read, instead of just saying it needs improvement? Better still, you 
could also post improvements to the documentation yourself. Just saying "little 
confusion in official documentation" is not enough to address the issue.



[jira] [Commented] (HADOOP-8924) Hadoop Common creating package-info.java must not depend on sh, at least for Windows

2013-01-11 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13551542#comment-13551542
 ] 

Alejandro Abdelnur commented on HADOOP-8924:


Patch LGTM (pending Jenkins), but somebody else should +1 it, as I'm partially 
responsible for it.

One thing I forgot to mention: currently (and with this patch) MD5 checksums 
are computed only for the sources in common and in yarn, and the VersionInfo 
from common is used in hdfs. IMO we should have either a global MD5 & 
VersionInfo for the whole project or one per module. That is out of scope for 
this JIRA; I just wanted to bring it up.



[jira] [Updated] (HADOOP-8924) Hadoop Common creating package-info.java must not depend on sh, at least for Windows

2013-01-11 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-8924:
--

Attachment: HADOOP-8924.3.patch

Re-uploading trunk patch so that Jenkins picks up the right file.



[jira] [Commented] (HADOOP-8924) Hadoop Common creating package-info.java must not depend on sh, at least for Windows

2013-01-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13551509#comment-13551509
 ] 

Hadoop QA commented on HADOOP-8924:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12564482/HADOOP-8924-branch-trunk-win.3.patch
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2029//console

This message is automatically generated.



[jira] [Updated] (HADOOP-8924) Hadoop Common creating package-info.java must not depend on sh, at least for Windows

2013-01-11 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-8924:
--

Status: Patch Available  (was: Reopened)



[jira] [Created] (HADOOP-9201) Trash can get Namespace collision

2013-01-11 Thread Robert Joseph Evans (JIRA)
Robert Joseph Evans created HADOOP-9201:
---

 Summary: Trash can get Namespace collision
 Key: HADOOP-9201
 URL: https://issues.apache.org/jira/browse/HADOOP-9201
 Project: Hadoop Common
  Issue Type: Bug
  Components: trash
Affects Versions: 0.23.5, 2.0.2-alpha, 1.0.2
Reporter: Robert Joseph Evans


{noformat}
$ hadoop fs -touchz test
$ hadoop fs -rm test
Moved: 'hdfs://nn:8020/user/ME/test' to trash at: 
hdfs://nn:8020/user/ME/.Trash/Current
$ hadoop fs -mkdir test
$ hadoop fs -touchz test/1
$ hadoop fs -rm test/1
WARN fs.TrashPolicyDefault: Can't create trash directory: 
hdfs://nn:8020/user/ME/.Trash/Current/user/ME/test
rm: Failed to move to trash: hdfs://nn:8020/user/ME/test/1. Consider using 
-skipTrash option
{noformat}

On 1.0.2 it looks more like
{noformat}
 WARN fs.Trash: Can't create trash directory: 
hdfs://nn:8020/user/ME/.Trash/Current/user/ME/test
Problem with Trash.java.io.FileNotFoundException: Parent path is not a 
directory: /user/ME/.Trash/Current/user/ME/test
at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.mkdirs(FSDirectory.java:949)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:2069)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:2030)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.mkdirs(NameNode.java:817)
at sun.reflect.GeneratedMethodAccessor19.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
. Consider using -skipTrash option
{noformat}
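The collision can be reproduced with plain filesystem operations. This is a local-filesystem analogy of the HDFS behavior, for illustration only (the paths and the `.Trash` layout mirror the report above, but none of this is Hadoop code): the first `rm` parks `test` in the trash as a file, and the second `rm` then needs a trash *directory* of the same name.

```python
import os
import shutil
import tempfile

root = tempfile.mkdtemp()
trash = os.path.join(root, ".Trash", "Current", "user", "ME")
os.makedirs(trash)

# First rm: file "test" is moved into the trash, creating a FILE at .../ME/test.
open(os.path.join(trash, "test"), "w").close()

# Second rm: moving "test/1" to trash requires .../ME/test to be a DIRECTORY,
# so the mkdirs call fails ("Parent path is not a directory").
try:
    os.makedirs(os.path.join(trash, "test", "1"))
    collided = False
except (FileExistsError, NotADirectoryError):
    collided = True

assert collided
shutil.rmtree(root)
```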



[jira] [Updated] (HADOOP-8924) Hadoop Common creating package-info.java must not depend on sh, at least for Windows

2013-01-11 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-8924:
--

Attachment: HADOOP-8924-branch-trunk-win.3.patch
HADOOP-8924.3.patch

I'm attaching version 3 of the patch for trunk and branch-trunk-win.  The 
differences since last time are that *.proto is included in the fileset for the 
MD5 calculation (see pom.xml files), and the MD5 calculation uses a 
platform-independent sort order for processing the files (see 
{{VersionInfoMojo#computeMD5}}).  The sort logic is a port of the earlier 
Python code on branch-trunk-win.

I've tested this on Mac, Windows, and Ubuntu.  MD5 calculations are consistent 
across platforms (though intentionally different from saveVersion.sh for the 
reasons discussed above).
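The platform-independent ordering described above can be sketched like this (a simplified illustration of the idea, not the actual `VersionInfoMojo#computeMD5` code; `source_md5` and its inputs are hypothetical):

```python
import hashlib


def source_md5(files):
    """MD5 over file contents, visited in a platform-independent order.

    `files` maps a relative path to its bytes. OS directory listings differ
    in order (and Windows uses backslashes), so paths are normalized and
    sorted before hashing, giving the same digest on every platform.
    """
    md5 = hashlib.md5()
    for path in sorted(files, key=lambda p: p.replace("\\", "/")):
        md5.update(files[path])
    return md5.hexdigest()


# Same files, different separators and listing order -> identical MD5.
windows = {"src\\Main.java": b"main", "src\\Util.java": b"util"}
unix = {"src/Util.java": b"util", "src/Main.java": b"main"}
assert source_md5(windows) == source_md5(unix)
```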



[jira] [Commented] (HADOOP-9194) RPC Support for QoS

2013-01-11 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13551466#comment-13551466
 ] 

Colin Patrick McCabe commented on HADOOP-9194:
--

Interesting stuff, guys. Many good points have been brought up.

The code path for UNIX domain sockets is like this now:
1. server calls accept(), gets a socket, hands it off to worker thread
2. worker thread reads the RPC header to find the type of message and length
3. worker thread reads the message
4. worker thread processes the message

Having QoS information in the header would allow us to prioritize the message 
after step #2.
Having QoS information in the protobuf would allow us to prioritize the message 
after step #3.

Since messages are normally just a few bytes, I'm not sure that this would be a 
big win.

In general, I think using a separate UNIX domain socket would probably make 
more sense. It would also let us use operating system features such as the 
accept backlog to our advantage; with a single socket we have to implement all 
of that ourselves, and we don't really have the tools in userspace to do a 
good job.

> RPC Support for QoS
> ---
>
> Key: HADOOP-9194
> URL: https://issues.apache.org/jira/browse/HADOOP-9194
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: ipc
>Affects Versions: 2.0.2-alpha
>Reporter: Luke Lu
>
> One of the next frontiers of Hadoop performance is QoS (Quality of Service). 
> We need QoS support to fight the inevitable "buffer bloat" (including various 
> queues, which are probably necessary for throughput) in our software stack. 
> This is important for mixed workload with different latency and throughput 
> requirements (e.g. OLTP vs OLAP, batch and even compaction I/O) against the 
> same DFS.
> Any potential bottleneck will need to be managed by QoS mechanisms, starting 
> with RPC. 
> How about adding a one byte DS (differentiated services) field (a la the 
> 6-bit DS field in IP header) in the RPC header to facilitate the QoS 
> mechanisms (in separate JIRAs)? The byte at a fixed offset (how about 0?) of 
> the header is helpful for implementing high performance QoS mechanisms in 
> switches (software or hardware) and servers with minimum decoding effort.



[jira] [Updated] (HADOOP-8594) Fix issues identified by findsbugs2

2013-01-11 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HADOOP-8594:


Summary: Fix issues identified by findsbugs2  (was: Upgrade to findbugs 2)

> Fix issues identified by findsbugs2
> ---
>
> Key: HADOOP-8594
> URL: https://issues.apache.org/jira/browse/HADOOP-8594
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.0.0-alpha
>Reporter: Eli Collins
> Attachments: findbugs-2-html-reports.tar.gz, findbugs.html, 
> findbugs.out.17.html, findbugs.out.18.html, findbugs.out.19.html, 
> findbugs.out.20.html, findbugs.out.21.html, findbugs.out.22.html, 
> findbugs.out.23.html, findbugs.out.27.html, findbugs.out.28.html, 
> findbugs.out.29.html, HADOOP-8594.patch
>
>
> Harsh recently ran findbugs 2 (instead of 1.3.9 which is what jenkins runs) 
> and it showed thousands of warnings (they've made a lot of progress in 
> findbugs releases). We should upgrade to findbugs 2 and fix these. 



[jira] [Commented] (HADOOP-8924) Hadoop Common creating package-info.java must not depend on sh, at least for Windows

2013-01-11 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13551416#comment-13551416
 ] 

Chris Nauroth commented on HADOOP-8924:
---

Line endings on the .proto files were consistent across platforms.  The .java 
files generated by protoc had Windows line endings though.

I'll add *.proto to the fileset in my next version of the patch.


> Hadoop Common creating package-info.java must not depend on sh, at least for 
> Windows
> 
>
> Key: HADOOP-8924
> URL: https://issues.apache.org/jira/browse/HADOOP-8924
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: trunk-win
>Reporter: Chris Nauroth
>Assignee: Alejandro Abdelnur
> Fix For: trunk-win
>
> Attachments: HADOOP-8924.2.patch, 
> HADOOP-8924-branch-trunk-win.2.patch, HADOOP-8924-branch-trunk-win.patch, 
> HADOOP-8924.patch
>
>
> Currently, the build process relies on saveVersion.sh to generate 
> package-info.java with a version annotation.  The sh binary may not be 
> available on all developers' machines (e.g. Windows without Cygwin). This 
> issue tracks removal of that dependency in Hadoop Common.



[jira] [Updated] (HADOOP-9200) enhance unit-test coverage of class org.apache.hadoop.security.NetgroupCache

2013-01-11 Thread Ivan A. Veselovsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan A. Veselovsky updated HADOOP-9200:
---

Attachment: HADOOP-9200-trunk.patch

The patch HADOOP-9200 is applicable to all 3 branches: trunk, branch-2, 
branch-0.23.

> enhance unit-test coverage of class org.apache.hadoop.security.NetgroupCache
> 
>
> Key: HADOOP-9200
> URL: https://issues.apache.org/jira/browse/HADOOP-9200
> Project: Hadoop Common
>  Issue Type: Test
>Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
>Reporter: Ivan A. Veselovsky
>Assignee: Ivan A. Veselovsky
> Attachments: HADOOP-9200-trunk.patch
>
>
> The class org.apache.hadoop.security.NetgroupCache has poor unit-test 
> coverage. Enhance it.



[jira] [Commented] (HADOOP-9097) Maven RAT plugin is not checking all source files

2013-01-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13551409#comment-13551409
 ] 

Hadoop QA commented on HADOOP-9097:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12564474/HADOOP-9097-branch-0.23-entire.patch
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2028//console

This message is automatically generated.

> Maven RAT plugin is not checking all source files
> -
>
> Key: HADOOP-9097
> URL: https://issues.apache.org/jira/browse/HADOOP-9097
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.0.3-alpha, 0.23.5
>Reporter: Tom White
>Assignee: Thomas Graves
>Priority: Critical
> Fix For: 2.0.3-alpha, 0.23.6
>
> Attachments: HADOOP-9097-branch-0.23-entire.patch, 
> HADOOP-9097-branch-0.23-entire.patch, HADOOP-9097-branch-0.23.patch, 
> HADOOP-9097-branch-0.23.patch, HADOOP-9097-entire.patch, 
> HADOOP-9097-entire.patch, HADOOP-9097.patch, HADOOP-9097.patch, 
> HADOOP-9097-remove-branch23.sh, HADOOP-9097-remove-branch2.sh, 
> HADOOP-9097-remove-entire.sh
>
>
> Running 'mvn apache-rat:check' passes, but running RAT by hand (by 
> downloading the JAR) produces some warnings for Java files, amongst others.



[jira] [Commented] (HADOOP-8924) Hadoop Common creating package-info.java must not depend on sh, at least for Windows

2013-01-11 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13551408#comment-13551408
 ] 

Alejandro Abdelnur commented on HADOOP-8924:


On generated files, I assume you refer to the protoc-generated files; we could add the 
protoc sources to the MD5.

> Hadoop Common creating package-info.java must not depend on sh, at least for 
> Windows
> 
>
> Key: HADOOP-8924
> URL: https://issues.apache.org/jira/browse/HADOOP-8924
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: trunk-win
>Reporter: Chris Nauroth
>Assignee: Alejandro Abdelnur
> Fix For: trunk-win
>
> Attachments: HADOOP-8924.2.patch, 
> HADOOP-8924-branch-trunk-win.2.patch, HADOOP-8924-branch-trunk-win.patch, 
> HADOOP-8924.patch
>
>
> Currently, the build process relies on saveVersion.sh to generate 
> package-info.java with a version annotation.  The sh binary may not be 
> available on all developers' machines (e.g. Windows without Cygwin). This 
> issue tracks removal of that dependency in Hadoop Common.



[jira] [Updated] (HADOOP-9097) Maven RAT plugin is not checking all source files

2013-01-11 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-9097:
--

Attachment: HADOOP-9097-branch-0.23-entire.patch
HADOOP-9097-branch-0.23.patch

Uploaded the corresponding branch-0.23 patches.

> Maven RAT plugin is not checking all source files
> -
>
> Key: HADOOP-9097
> URL: https://issues.apache.org/jira/browse/HADOOP-9097
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.0.3-alpha, 0.23.5
>Reporter: Tom White
>Assignee: Thomas Graves
>Priority: Critical
> Fix For: 2.0.3-alpha, 0.23.6
>
> Attachments: HADOOP-9097-branch-0.23-entire.patch, 
> HADOOP-9097-branch-0.23-entire.patch, HADOOP-9097-branch-0.23.patch, 
> HADOOP-9097-branch-0.23.patch, HADOOP-9097-entire.patch, 
> HADOOP-9097-entire.patch, HADOOP-9097.patch, HADOOP-9097.patch, 
> HADOOP-9097-remove-branch23.sh, HADOOP-9097-remove-branch2.sh, 
> HADOOP-9097-remove-entire.sh
>
>
> Running 'mvn apache-rat:check' passes, but running RAT by hand (by 
> downloading the JAR) produces some warnings for Java files, amongst others.



[jira] [Commented] (HADOOP-9194) RPC Support for QoS

2013-01-11 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13551402#comment-13551402
 ] 

Eli Collins commented on HADOOP-9194:
-

Agree, though QoS requires more than just RPC support.

> RPC Support for QoS
> ---
>
> Key: HADOOP-9194
> URL: https://issues.apache.org/jira/browse/HADOOP-9194
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: ipc
>Affects Versions: 2.0.2-alpha
>Reporter: Luke Lu
>
> One of the next frontiers of Hadoop performance is QoS (Quality of Service). 
> We need QoS support to fight the inevitable "buffer bloat" (including various 
> queues, which are probably necessary for throughput) in our software stack. 
> This is important for mixed workload with different latency and throughput 
> requirements (e.g. OLTP vs OLAP, batch and even compaction I/O) against the 
> same DFS.
> Any potential bottleneck will need to be managed by QoS mechanisms, starting 
> with RPC. 
> How about adding a one byte DS (differentiated services) field (a la the 
> 6-bit DS field in IP header) in the RPC header to facilitate the QoS 
> mechanisms (in separate JIRAs)? The byte at a fixed offset (how about 0?) of 
> the header is helpful for implementing high performance QoS mechanisms in 
> switches (software or hardware) and servers with minimum decoding effort.



[jira] [Commented] (HADOOP-8924) Hadoop Common creating package-info.java must not depend on sh, at least for Windows

2013-01-11 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13551397#comment-13551397
 ] 

Chris Nauroth commented on HADOOP-8924:
---

Thanks, Matt and Alejandro.  I'm seeing differences between MD5s on Mac vs. 
Windows due to file sorting differences.  I'm working on an updated patch to 
address it.

One other difference between this and saveVersion.sh is that saveVersion.sh 
included generated-sources, but this plugin won't, because we're binding it to 
the initialize phase.  During the Python port, we chose to exclude 
generated-sources, because we can't guarantee that generated stuff (i.e. 
protoc) will generate code with the same line endings regardless of platform.  
This had been causing different MD5s on Windows, so we changed the Python 
script to skip generated-sources.  I think skipping generated-sources is fine.  
I'm just mentioning it here so that everyone is aware of the difference.
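The sorting concern can be illustrated in a few lines: a plain char-value comparison 
(what `LC_ALL=C sort` and Java's `String.compareTo` both do) orders names identically on 
every platform, whereas locale-aware collation may not. A minimal sketch with 
hypothetical file names:

```java
import java.util.Arrays;
import java.util.List;

public class SortDemo {
    public static void main(String[] args) {
        // Hypothetical file names; '.' (46), '_' (95), and letter case
        // interleave differently under locale-aware collation than under
        // raw char-value order.
        List<String> names = Arrays.asList("b.java", "A.java", "a_b.java", "a.java");

        // String.compareTo compares char values directly (like LC_ALL=C),
        // so the result is identical on every platform and JVM locale.
        String[] sorted = names.stream().sorted().toArray(String[]::new);
        System.out.println(Arrays.toString(sorted));
        // prints [A.java, a.java, a_b.java, b.java]
    }
}
```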

Matt, I was planning on testing Mac, Windows, and Ubuntu, because that's what I 
have access to right now.  Is there any particular reason to retest on RHEL5 
and RHEL6 in addition to that?  That probably would have been important for the 
Python script, because RHEL distributions are tightly coupled to a particular 
Python version, but I don't think it's relevant for the Java implementation.  
If you disagree, let me know, and I'll spin up some RHEL VMs.


> Hadoop Common creating package-info.java must not depend on sh, at least for 
> Windows
> 
>
> Key: HADOOP-8924
> URL: https://issues.apache.org/jira/browse/HADOOP-8924
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: trunk-win
>Reporter: Chris Nauroth
>Assignee: Alejandro Abdelnur
> Fix For: trunk-win
>
> Attachments: HADOOP-8924.2.patch, 
> HADOOP-8924-branch-trunk-win.2.patch, HADOOP-8924-branch-trunk-win.patch, 
> HADOOP-8924.patch
>
>
> Currently, the build process relies on saveVersion.sh to generate 
> package-info.java with a version annotation.  The sh binary may not be 
> available on all developers' machines (e.g. Windows without Cygwin). This 
> issue tracks removal of that dependency in Hadoop Common.



[jira] [Updated] (HADOOP-9097) Maven RAT plugin is not checking all source files

2013-01-11 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-9097:
--

Attachment: HADOOP-9097.patch
HADOOP-9097-entire.patch

Updated the common pom.xml to include the .idea/** and .git/** patterns.

Also uploaded the entire patch, which includes that common change plus the addition of 
the tree.h license to the HDFS LICENSE.txt.

> Maven RAT plugin is not checking all source files
> -
>
> Key: HADOOP-9097
> URL: https://issues.apache.org/jira/browse/HADOOP-9097
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.0.3-alpha, 0.23.5
>Reporter: Tom White
>Assignee: Thomas Graves
>Priority: Critical
> Fix For: 2.0.3-alpha, 0.23.6
>
> Attachments: HADOOP-9097-branch-0.23-entire.patch, 
> HADOOP-9097-branch-0.23.patch, HADOOP-9097-entire.patch, 
> HADOOP-9097-entire.patch, HADOOP-9097.patch, HADOOP-9097.patch, 
> HADOOP-9097-remove-branch23.sh, HADOOP-9097-remove-branch2.sh, 
> HADOOP-9097-remove-entire.sh
>
>
> Running 'mvn apache-rat:check' passes, but running RAT by hand (by 
> downloading the JAR) produces some warnings for Java files, amongst others.



[jira] [Commented] (HADOOP-9194) RPC Support for QoS

2013-01-11 Thread Luke Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13551391#comment-13551391
 ] 

Luke Lu commented on HADOOP-9194:
-

You can use a port to differentiate services for IP connections, which essentially 
communicates the service class out of band by convention. This is a reasonable hack for 
internal use (a la HDFS-599) given the lack of support in the RPC itself. Things quickly 
get out of hand once there are more service classes and/or transport mechanisms without 
ports (say, again, a unix domain socket, with yet another file naming convention?), let 
alone support for proxies, load balancing, and firewalls.

If we want to use Hadoop RPC for general-purpose DFS (or computing) client protocols, it 
needs to support QoS natively. Having well-defined QoS semantics in the RPC also lends 
itself to common libraries of QoS algorithms that can be adopted at every necessary 
layer of our software stack.
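The idea of a fixed-offset DS byte can be sketched as follows. The frame layout here is 
an illustrative assumption, not the actual Hadoop RPC wire format: the point is only 
that a one-byte field at offset 0 lets a switch or server classify a call with a single 
byte read, before any header decoding.

```java
import java.nio.ByteBuffer;

public class DsFraming {
    // Hypothetical layout: [1-byte DS field][payload...]
    static ByteBuffer frame(int dsCodePoint, byte[] payload) {
        ByteBuffer buf = ByteBuffer.allocate(1 + payload.length);
        buf.put((byte) dsCodePoint);   // DS byte at offset 0
        buf.put(payload);
        buf.flip();
        return buf;
    }

    // Classification needs only an absolute read of byte 0;
    // no protobuf/header decoding is required.
    static int dsOf(ByteBuffer framed) {
        return framed.get(0) & 0xFF;
    }

    public static void main(String[] args) {
        // 46 is an arbitrary illustrative code point, not a defined class.
        ByteBuffer call = frame(46, "getBlockLocations".getBytes());
        System.out.println(dsOf(call)); // prints 46
    }
}
```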

> RPC Support for QoS
> ---
>
> Key: HADOOP-9194
> URL: https://issues.apache.org/jira/browse/HADOOP-9194
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: ipc
>Affects Versions: 2.0.2-alpha
>Reporter: Luke Lu
>
> One of the next frontiers of Hadoop performance is QoS (Quality of Service). 
> We need QoS support to fight the inevitable "buffer bloat" (including various 
> queues, which are probably necessary for throughput) in our software stack. 
> This is important for mixed workload with different latency and throughput 
> requirements (e.g. OLTP vs OLAP, batch and even compaction I/O) against the 
> same DFS.
> Any potential bottleneck will need to be managed by QoS mechanisms, starting 
> with RPC. 
> How about adding a one byte DS (differentiated services) field (a la the 
> 6-bit DS field in IP header) in the RPC header to facilitate the QoS 
> mechanisms (in separate JIRAs)? The byte at a fixed offset (how about 0?) of 
> the header is helpful for implementing high performance QoS mechanisms in 
> switches (software or hardware) and servers with minimum decoding effort.



[jira] [Created] (HADOOP-9200) enhance unit-test coverage of class org.apache.hadoop.security.NetgroupCache

2013-01-11 Thread Ivan A. Veselovsky (JIRA)
Ivan A. Veselovsky created HADOOP-9200:
--

 Summary: enhance unit-test coverage of class 
org.apache.hadoop.security.NetgroupCache
 Key: HADOOP-9200
 URL: https://issues.apache.org/jira/browse/HADOOP-9200
 Project: Hadoop Common
  Issue Type: Test
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
Reporter: Ivan A. Veselovsky
Assignee: Ivan A. Veselovsky


The class org.apache.hadoop.security.NetgroupCache has poor unit-test coverage. 
Enhance it.



[jira] [Commented] (HADOOP-8849) FileUtil#fullyDelete should grant the target directories +rwx permissions before trying to delete them

2013-01-11 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13551385#comment-13551385
 ] 

Robert Joseph Evans commented on HADOOP-8849:
-

I am not sure that we actually want to make this change.  FileUtil.fullyDelete is 
called by RawLocalFileSystem.delete for recursive deletes.  With this change, 
RawLocalFileSystem will ignore permissions on recursive deletes whenever the user 
has permission to change those permissions.  In all other places that I 
have seen the API used I think it is OK, but this is also a publicly visible 
API, so I don't know who else this may cause problems for.  I would rather see 
a new API created separate from the original one, and the javadocs updated to 
explain the difference between the two APIs.  Perhaps something like

{code}
public static boolean fullyDelete(final File dir) {
  return fullyDelete(dir, false);
}

/**
 * Delete a directory and all its contents.  If
 * we return false, the directory may be partially-deleted.
 * (1) If dir is symlink to a file, the symlink is deleted. The file pointed
 * to by the symlink is not deleted.
 * (2) If dir is symlink to a directory, symlink is deleted. The directory
 * pointed to by symlink is not deleted.
 * (3) If dir is a normal file, it is deleted.
 * (4) If dir is a normal directory, then dir and all its contents recursively
 * are deleted.
 * @param dir the file or directory to be deleted
 * @param tryUpdatePerms true if permissions should be modified to delete a file.
 * @return true on success false on failure.
 */
public static boolean fullyDelete(final File dir, boolean tryUpdatePerms) {
  ...
{code}

> FileUtil#fullyDelete should grant the target directories +rwx permissions 
> before trying to delete them
> --
>
> Key: HADOOP-8849
> URL: https://issues.apache.org/jira/browse/HADOOP-8849
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ivan A. Veselovsky
>Assignee: Ivan A. Veselovsky
>Priority: Minor
> Attachments: HADOOP-8849-vs-trunk-4.patch
>
>
> 2 improvements are suggested for implementation of methods 
> org.apache.hadoop.fs.FileUtil.fullyDelete(File) and 
> org.apache.hadoop.fs.FileUtil.fullyDeleteContents(File):
>  
> 1) We should grant +rwx permissions to the target directories before trying to 
> delete them.
> The mentioned methods fail to delete directories that don't have read or 
> execute permissions.
> Actual problem appears if an hdfs-related test is timed out (with a short 
> timeout like tens of seconds), and the forked test process is killed, some 
> directories are left on disk that are not readable and/or executable. This 
> prevents next tests from being executed properly because these directories 
> cannot be deleted with FileUtil#fullyDelete(), so many subsequent tests fail. 
> So, it is recommended to grant read, write, and execute permissions to the 
> directories whose content is to be deleted.
> 2) Generic reliability improvement: we shouldn't rely upon File#delete() 
> return value, use File#exists() instead. 
> FileUtil#fullyDelete() uses return value of method java.io.File#delete(), but 
> this is not reliable because File#delete() returns true only if the file was 
> deleted as a result of the #delete() method invocation. E.g. in the following 
> code
> if (f.exists()) { // 1
>   return f.delete(); // 2
> }
> if the file f was deleted by another thread or process between calls "1" and 
> "2", this fragment returns "false" even though the file f no longer exists 
> when the method returns.
> So, better to write
> if (f.exists()) {
>   f.delete();
>   return !f.exists();
> }
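The exists-recheck pattern from the description above, wrapped as a small 
self-contained helper. The method name deleteReliably is illustrative, not an actual 
FileUtil API:

```java
import java.io.File;
import java.io.IOException;

public class DeleteDemo {
    // Treat "file no longer exists" as success, regardless of whether
    // this call or a concurrent one performed the deletion.
    static boolean deleteReliably(File f) {
        if (f.exists()) {
            f.delete();           // ignore the return value...
            return !f.exists();   // ...and re-check existence instead
        }
        return true;              // already gone: nothing to do
    }

    public static void main(String[] args) throws IOException {
        File tmp = File.createTempFile("demo", ".tmp");
        System.out.println(deleteReliably(tmp)); // prints true
        System.out.println(deleteReliably(tmp)); // still true: file is gone
    }
}
```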



[jira] [Commented] (HADOOP-8924) Hadoop Common creating package-info.java must not depend on sh, at least for Windows

2013-01-11 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13551382#comment-13551382
 ] 

Alejandro Abdelnur commented on HADOOP-8924:


Yeah, I'm good with this approach.

The MD5 computed by saveVersion.sh is:

{code}
srcChecksum=`find src/main/java -name '*.java' | LC_ALL=C sort | xargs md5sum | 
md5sum | cut -d ' ' -f 1`
{code}

That is: sort the files, compute an MD5 for each file, and then compute an MD5 over the 
combined MD5 output of all files.

The MD5 computed by the plugin is a single MD5 on the content of ALL files, 
sorted.

So the MD5s computed by the script and the plugin won't be the same. 

But the MD5 computed by the plugin should always be the same on different platforms for 
the same source.
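A sketch of the plugin-style checksum described above: a single MD5 digest updated with 
the contents of every file, visited in sorted order. The directory path and the .java 
filter are assumptions for illustration, and this is deliberately not the 
saveVersion.sh MD5-of-MD5s scheme:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class SourceChecksum {
    static String checksum(Path root) throws IOException, NoSuchAlgorithmException {
        MessageDigest md5 = MessageDigest.getInstance("MD5");
        try (Stream<Path> walk = Files.walk(root)) {
            // Path's natural ordering is platform-independent, so the
            // digest input order is deterministic across machines.
            List<Path> files = walk.filter(p -> p.toString().endsWith(".java"))
                                   .sorted()
                                   .collect(Collectors.toList());
            for (Path p : files) {
                md5.update(Files.readAllBytes(p)); // one digest over all contents
            }
        }
        StringBuilder hex = new StringBuilder();
        for (byte b : md5.digest()) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(checksum(Paths.get("src/main/java")));
    }
}
```

Note that, as discussed earlier in the thread, this is still sensitive to line-ending 
differences in the file contents themselves, which is why generated sources are skipped.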


> Hadoop Common creating package-info.java must not depend on sh, at least for 
> Windows
> 
>
> Key: HADOOP-8924
> URL: https://issues.apache.org/jira/browse/HADOOP-8924
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: trunk-win
>Reporter: Chris Nauroth
>Assignee: Alejandro Abdelnur
> Fix For: trunk-win
>
> Attachments: HADOOP-8924.2.patch, 
> HADOOP-8924-branch-trunk-win.2.patch, HADOOP-8924-branch-trunk-win.patch, 
> HADOOP-8924.patch
>
>
> Currently, the build process relies on saveVersion.sh to generate 
> package-info.java with a version annotation.  The sh binary may not be 
> available on all developers' machines (e.g. Windows without Cygwin). This 
> issue tracks removal of that dependency in Hadoop Common.



[jira] [Updated] (HADOOP-9078) enhance unit-test coverage of class org.apache.hadoop.fs.FileContext

2013-01-11 Thread Ivan A. Veselovsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan A. Veselovsky updated HADOOP-9078:
---

Attachment: HADOOP-9078--b.patch

Re-attaching the patch for trunk to trigger patch verification (the previous 
verification picked up the patch for branch-2).

> enhance unit-test coverage of class org.apache.hadoop.fs.FileContext
> 
>
> Key: HADOOP-9078
> URL: https://issues.apache.org/jira/browse/HADOOP-9078
> Project: Hadoop Common
>  Issue Type: Test
>Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
>Reporter: Ivan A. Veselovsky
>Assignee: Ivan A. Veselovsky
> Attachments: HADOOP-9078--b.patch, HADOOP-9078-branch-0.23.patch, 
> HADOOP-9078-branch-2--b.patch, HADOOP-9078-branch-2.patch, HADOOP-9078.patch, 
> HADOOP-9078-patch-from-[trunk-gd]-to-[fb-HADOOP-9078-trunk-gd]-N1.patch
>
>




[jira] [Commented] (HADOOP-9199) Cover package org.apache.hadoop.io with unit tests

2013-01-11 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13551373#comment-13551373
 ] 

Suresh Srinivas commented on HADOOP-9199:
-

One more quick comment from looking at the patch: please add a brief description to 
every test explaining what it is testing. Tests become harder to understand and 
maintain without this.

> Cover package org.apache.hadoop.io with unit tests
> --
>
> Key: HADOOP-9199
> URL: https://issues.apache.org/jira/browse/HADOOP-9199
> Project: Hadoop Common
>  Issue Type: Test
>Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
>Reporter: Vadim Bondarev
> Attachments: HADOOP-9199-branch-0.23-a.patch, 
> HADOOP-9199-branch-2-a.patch, HADOOP-9199-trunk-a.patch
>
>




[jira] [Updated] (HADOOP-9078) enhance unit-test coverage of class org.apache.hadoop.fs.FileContext

2013-01-11 Thread Ivan A. Veselovsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan A. Veselovsky updated HADOOP-9078:
---

Attachment: (was: HADOOP-9078--b.patch)

> enhance unit-test coverage of class org.apache.hadoop.fs.FileContext
> 
>
> Key: HADOOP-9078
> URL: https://issues.apache.org/jira/browse/HADOOP-9078
> Project: Hadoop Common
>  Issue Type: Test
>Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
>Reporter: Ivan A. Veselovsky
>Assignee: Ivan A. Veselovsky
> Attachments: HADOOP-9078--b.patch, HADOOP-9078-branch-0.23.patch, 
> HADOOP-9078-branch-2--b.patch, HADOOP-9078-branch-2.patch, HADOOP-9078.patch, 
> HADOOP-9078-patch-from-[trunk-gd]-to-[fb-HADOOP-9078-trunk-gd]-N1.patch
>
>




[jira] [Commented] (HADOOP-9199) Cover package org.apache.hadoop.io with unit tests

2013-01-11 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13551370#comment-13551370
 ] 

Suresh Srinivas commented on HADOOP-9199:
-

Can you please add a brief description of changes you are making in this patch?

> Cover package org.apache.hadoop.io with unit tests
> --
>
> Key: HADOOP-9199
> URL: https://issues.apache.org/jira/browse/HADOOP-9199
> Project: Hadoop Common
>  Issue Type: Test
>Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
>Reporter: Vadim Bondarev
> Attachments: HADOOP-9199-branch-0.23-a.patch, 
> HADOOP-9199-branch-2-a.patch, HADOOP-9199-trunk-a.patch
>
>




[jira] [Commented] (HADOOP-9199) Cover package org.apache.hadoop.io with unit tests

2013-01-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13551326#comment-13551326
 ] 

Hadoop QA commented on HADOOP-9199:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12564441/HADOOP-9199-trunk-a.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 16 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.ha.TestZKFailoverController

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2026//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2026//console

This message is automatically generated.

> Cover package org.apache.hadoop.io with unit tests
> --
>
> Key: HADOOP-9199
> URL: https://issues.apache.org/jira/browse/HADOOP-9199
> Project: Hadoop Common
>  Issue Type: Test
>Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
>Reporter: Vadim Bondarev
> Attachments: HADOOP-9199-branch-0.23-a.patch, 
> HADOOP-9199-branch-2-a.patch, HADOOP-9199-trunk-a.patch
>
>




[jira] [Commented] (HADOOP-9097) Maven RAT plugin is not checking all source files

2013-01-11 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13551307#comment-13551307
 ] 

Tom White commented on HADOOP-9097:
---

Sorry I missed the remove script. That looks good to me.

> Maven RAT plugin is not checking all source files
> -
>
> Key: HADOOP-9097
> URL: https://issues.apache.org/jira/browse/HADOOP-9097
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.0.3-alpha, 0.23.5
>Reporter: Tom White
>Assignee: Thomas Graves
>Priority: Critical
> Fix For: 2.0.3-alpha, 0.23.6
>
> Attachments: HADOOP-9097-branch-0.23-entire.patch, 
> HADOOP-9097-branch-0.23.patch, HADOOP-9097-entire.patch, HADOOP-9097.patch, 
> HADOOP-9097-remove-branch23.sh, HADOOP-9097-remove-branch2.sh, 
> HADOOP-9097-remove-entire.sh
>
>
> Running 'mvn apache-rat:check' passes, but running RAT by hand (by 
> downloading the JAR) produces some warnings for Java files, amongst others.



[jira] [Commented] (HADOOP-9192) Move token related request/response messages to common

2013-01-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13551283#comment-13551283
 ] 

Hudson commented on HADOOP-9192:


Integrated in Hadoop-trunk-Commit #3218 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3218/])
HADOOP-9192. Move token related request/response messages to common. 
Contributed by Suresh Srinivas. (Revision 1432158)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1432158
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/proto/Security.proto


> Move token related request/response messages to common
> --
>
> Key: HADOOP-9192
> URL: https://issues.apache.org/jira/browse/HADOOP-9192
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.0.2-alpha
>Reporter: Suresh Srinivas
>Assignee: Suresh Srinivas
> Fix For: 2.0.3-alpha
>
> Attachments: HADOOP-9192.patch, HADOOP-9192.patch, HADOOP-9192.patch
>
>
> Get, Renew and Cancel delegation token requests and responses are repeated in 
> HDFS, Yarn and MR. This jira proposes to move these messages into 
> Security.proto in common.



[jira] [Updated] (HADOOP-9192) Move token related request/response messages to common

2013-01-11 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-9192:


   Resolution: Fixed
Fix Version/s: 2.0.3-alpha
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Sid, thank you for the review. I have committed this change to trunk and 
branch-2.

> Move token related request/response messages to common
> --
>
> Key: HADOOP-9192
> URL: https://issues.apache.org/jira/browse/HADOOP-9192
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.0.2-alpha
>Reporter: Suresh Srinivas
>Assignee: Suresh Srinivas
> Fix For: 2.0.3-alpha
>
> Attachments: HADOOP-9192.patch, HADOOP-9192.patch, HADOOP-9192.patch
>
>
> Get, Renew and Cancel delegation token requests and responses are repeated in 
> HDFS, Yarn and MR. This jira proposes to move these messages into 
> Security.proto in common.



[jira] [Updated] (HADOOP-9199) Cover package org.apache.hadoop.io with unit tests

2013-01-11 Thread Vadim Bondarev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Bondarev updated HADOOP-9199:
---

Fix Version/s: (was: 0.23.6)
   (was: 2.0.3-alpha)
   (was: 3.0.0)

> Cover package org.apache.hadoop.io with unit tests
> --
>
> Key: HADOOP-9199
> URL: https://issues.apache.org/jira/browse/HADOOP-9199
> Project: Hadoop Common
>  Issue Type: Test
>Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
>Reporter: Vadim Bondarev
> Attachments: HADOOP-9199-branch-0.23-a.patch, 
> HADOOP-9199-branch-2-a.patch, HADOOP-9199-trunk-a.patch
>
>




[jira] [Updated] (HADOOP-9199) Cover package org.apache.hadoop.io with unit tests

2013-01-11 Thread Vadim Bondarev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Bondarev updated HADOOP-9199:
---

Fix Version/s: 0.23.6
   2.0.3-alpha
   3.0.0
   Status: Patch Available  (was: Open)

> Cover package org.apache.hadoop.io with unit tests
> --
>
> Key: HADOOP-9199
> URL: https://issues.apache.org/jira/browse/HADOOP-9199
> Project: Hadoop Common
>  Issue Type: Test
>Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
>Reporter: Vadim Bondarev
> Fix For: 3.0.0, 2.0.3-alpha, 0.23.6
>
> Attachments: HADOOP-9199-branch-0.23-a.patch, 
> HADOOP-9199-branch-2-a.patch, HADOOP-9199-trunk-a.patch
>
>




[jira] [Commented] (HADOOP-9139) improve script hadoop-common-project/hadoop-common/src/test/resources/kdc/killKdc.sh

2013-01-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13551271#comment-13551271
 ] 

Hudson commented on HADOOP-9139:


Integrated in Hadoop-trunk-Commit #3217 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3217/])
HADOOP-9139 improve killKdc.sh (Ivan A. Veselovsky via bobby) (Revision 
1432151)

 Result = SUCCESS
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1432151
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/resources/kdc/killKdc.sh


> improve script 
> hadoop-common-project/hadoop-common/src/test/resources/kdc/killKdc.sh
> 
>
> Key: HADOOP-9139
> URL: https://issues.apache.org/jira/browse/HADOOP-9139
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Ivan A. Veselovsky
>Assignee: Ivan A. Veselovsky
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HADOOP-9139--b.patch, HADOOP-9139.patch
>
>
> Script hadoop-common-project/hadoop-common/src/test/resources/kdc/killKdc.sh 
> is used in "internal" Kerberos tests to kill started apacheds server.
> There are 2 problems in the script:
> 1) it invokes "kill" even if there are no running apacheds servers;
> 2) it does not work correctly on all Linux platforms since "cut -f4 -d ' '" 
> command relies upon the exact number of spaces in the ps output, but this 
> number can be different.
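The second problem can be sidestepped with awk, whose default field splitting collapses runs of whitespace; a minimal sketch (the sample ps line and the "apacheds" process name below are illustrative, not taken from a real system):

```shell
# Illustrative ps-style line with irregular column padding.
sample="user     12345  0.0  0.1 apacheds"

# awk splits on runs of whitespace, so $2 is always the PID;
# cut -f4 -d ' ' depends on the exact number of spaces.
pid=$(echo "$sample" | awk '{print $2}')

# Guarding the kill also addresses problem 1: nothing runs when no
# server matched. (echo stands in for kill to keep this side-effect free.)
[ -n "$pid" ] && echo "would kill $pid"
```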



[jira] [Updated] (HADOOP-9199) Cover package org.apache.hadoop.io with unit tests

2013-01-11 Thread Vadim Bondarev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Bondarev updated HADOOP-9199:
---

Attachment: HADOOP-9199-trunk-a.patch
HADOOP-9199-branch-2-a.patch
HADOOP-9199-branch-0.23-a.patch

> Cover package org.apache.hadoop.io with unit tests
> --
>
> Key: HADOOP-9199
> URL: https://issues.apache.org/jira/browse/HADOOP-9199
> Project: Hadoop Common
>  Issue Type: Test
>Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
>Reporter: Vadim Bondarev
> Attachments: HADOOP-9199-branch-0.23-a.patch, 
> HADOOP-9199-branch-2-a.patch, HADOOP-9199-trunk-a.patch
>
>




[jira] [Resolved] (HADOOP-9198) Update Flume Wiki and User Guide to provide clearer explanation of BatchSize, ChannelCapacity and ChannelTransactionCapacity properties.

2013-01-11 Thread Jeff Lord (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Lord resolved HADOOP-9198.
---

Resolution: Fixed

This is a Flume issue and will be moved to that JIRA accordingly.

> Update Flume Wiki and User Guide to provide clearer explanation of BatchSize, 
> ChannelCapacity and ChannelTransactionCapacity properties.
> 
>
> Key: HADOOP-9198
> URL: https://issues.apache.org/jira/browse/HADOOP-9198
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Jeff Lord
>
> It would be good if we refined our wiki and user guide to help explain the 
> following in a more clear fashion:
> 1) Batch Size 
>   1.a) When configured by client code using the flume-core-sdk, to send 
> events to flume avro source.
> The flume client sdk has an appendBatch method. This will take a list of 
> events and send them to the source as a batch. This is the size of the number 
> of events to be passed to the source at one time.
>   1.b) When set as a parameter on HDFS sink (or other sinks which support 
> BatchSize parameter)
> This is the number of events written to a file before it is flushed to HDFS.
> 2)
>   2.a) Channel Capacity
> This is the maximum number of events the channel can hold.
>   2.b) Channel Transaction Capacity.
> This is the max number of events stored in the channel per transaction.
> How will setting these parameters to different values affect throughput and 
> latency in event flow?
> In general you will see better throughput by using memory channel as opposed 
> to using file channel at the loss of durability.
> The channel capacity is going to need to be sized such that it is large 
> enough to hold as many events as will be added to it by upstream agents. 
> Ideal flow would see the sink draining events from the channel faster than it 
> is having events added by its source.
> The channel transaction capacity will need to be smaller than the channel 
> capacity.
> e.g. If your Channel capacity is set to 1 then Channel Transaction 
> Capacity should be set to something like 100.
> Specifically if we have clients with varying frequency of event generation, 
> i.e. some clients generating thousands of events/sec, while
> others at a much slower rate, what effect will different values of these 
> params have on these clients ?
> Transaction Capacity is going to be what throttles or limits how many events 
> the source can put into the channel. This is going to vary depending on how many 
> tiers of agents/collectors you have setup.
> In general though this should probably be equal to whatever you have the 
> batch size set to in your client.
> With regards to the hdfs batch size, the larger your batch size the better 
> performance will be. However, keep in mind that if a transaction fails the 
> entire transaction will be replayed which could have the implication of 
> duplicate events downstream.



[jira] [Updated] (HADOOP-9139) improve script hadoop-common-project/hadoop-common/src/test/resources/kdc/killKdc.sh

2013-01-11 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-9139:


   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Thanks Ivan for the patch.  I put this into trunk.

> improve script 
> hadoop-common-project/hadoop-common/src/test/resources/kdc/killKdc.sh
> 
>
> Key: HADOOP-9139
> URL: https://issues.apache.org/jira/browse/HADOOP-9139
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Ivan A. Veselovsky
>Assignee: Ivan A. Veselovsky
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HADOOP-9139--b.patch, HADOOP-9139.patch
>
>
> Script hadoop-common-project/hadoop-common/src/test/resources/kdc/killKdc.sh 
> is used in "internal" Kerberos tests to kill started apacheds server.
> There are 2 problems in the script:
> 1) it invokes "kill" even if there are no running apacheds servers;
> 2) it does not work correctly on all Linux platforms since "cut -f4 -d ' '" 
> command relies upon the exact number of spaces in the ps output, but this 
> number can be different.



[jira] [Created] (HADOOP-9199) Cover package org.apache.hadoop.io with unit tests

2013-01-11 Thread Vadim Bondarev (JIRA)
Vadim Bondarev created HADOOP-9199:
--

 Summary: Cover package org.apache.hadoop.io with unit tests
 Key: HADOOP-9199
 URL: https://issues.apache.org/jira/browse/HADOOP-9199
 Project: Hadoop Common
  Issue Type: Test
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
Reporter: Vadim Bondarev






[jira] [Commented] (HADOOP-9097) Maven RAT plugin is not checking all source files

2013-01-11 Thread Thomas Graves (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13551257#comment-13551257
 ] 

Thomas Graves commented on HADOOP-9097:
---

Thanks Tom. The empty files are removed via the HADOOP-9097-remove.sh script I 
attached. I could probably make those scripts a bit better, as they just do svn 
rm .

I'll add the tree.h license into hdfs. I'll also add the .git and .idea to top 
level.

> Maven RAT plugin is not checking all source files
> -
>
> Key: HADOOP-9097
> URL: https://issues.apache.org/jira/browse/HADOOP-9097
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.0.3-alpha, 0.23.5
>Reporter: Tom White
>Assignee: Thomas Graves
>Priority: Critical
> Fix For: 2.0.3-alpha, 0.23.6
>
> Attachments: HADOOP-9097-branch-0.23-entire.patch, 
> HADOOP-9097-branch-0.23.patch, HADOOP-9097-entire.patch, HADOOP-9097.patch, 
> HADOOP-9097-remove-branch23.sh, HADOOP-9097-remove-branch2.sh, 
> HADOOP-9097-remove-entire.sh
>
>
> Running 'mvn apache-rat:check' passes, but running RAT by hand (by 
> downloading the JAR) produces some warnings for Java files, amongst others.



[jira] [Commented] (HADOOP-9139) improve script hadoop-common-project/hadoop-common/src/test/resources/kdc/killKdc.sh

2013-01-11 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13551256#comment-13551256
 ] 

Robert Joseph Evans commented on HADOOP-9139:
-

The change looks fine to me. +1 I'll check it in.

> improve script 
> hadoop-common-project/hadoop-common/src/test/resources/kdc/killKdc.sh
> 
>
> Key: HADOOP-9139
> URL: https://issues.apache.org/jira/browse/HADOOP-9139
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Ivan A. Veselovsky
>Assignee: Ivan A. Veselovsky
>Priority: Minor
> Attachments: HADOOP-9139--b.patch, HADOOP-9139.patch
>
>
> Script hadoop-common-project/hadoop-common/src/test/resources/kdc/killKdc.sh 
> is used in "internal" Kerberos tests to kill started apacheds server.
> There are 2 problems in the script:
> 1) it invokes "kill" even if there are no running apacheds servers;
> 2) it does not work correctly on all Linux platforms since "cut -f4 -d ' '" 
> command relies upon the exact number of spaces in the ps output, but this 
> number can be different.



[jira] [Updated] (HADOOP-9198) Update Flume Wiki and User Guide to provide clearer explanation of BatchSize, ChannelCapacity and ChannelTransactionCapacity properties.

2013-01-11 Thread Jeff Lord (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Lord updated HADOOP-9198:
--

Component/s: documentation

> Update Flume Wiki and User Guide to provide clearer explanation of BatchSize, 
> ChannelCapacity and ChannelTransactionCapacity properties.
> 
>
> Key: HADOOP-9198
> URL: https://issues.apache.org/jira/browse/HADOOP-9198
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Jeff Lord
>
> It would be good if we refined our wiki and user guide to help explain the 
> following in a more clear fashion:
> 1) Batch Size 
>   1.a) When configured by client code using the flume-core-sdk, to send 
> events to flume avro source.
> The flume client sdk has an appendBatch method. This will take a list of 
> events and send them to the source as a batch. This is the size of the number 
> of events to be passed to the source at one time.
>   1.b) When set as a parameter on HDFS sink (or other sinks which support 
> BatchSize parameter)
> This is the number of events written to a file before it is flushed to HDFS.
> 2)
>   2.a) Channel Capacity
> This is the maximum number of events the channel can hold.
>   2.b) Channel Transaction Capacity.
> This is the max number of events stored in the channel per transaction.
> How will setting these parameters to different values affect throughput and 
> latency in event flow?
> In general you will see better throughput by using memory channel as opposed 
> to using file channel at the loss of durability.
> The channel capacity is going to need to be sized such that it is large 
> enough to hold as many events as will be added to it by upstream agents. 
> Ideal flow would see the sink draining events from the channel faster than it 
> is having events added by its source.
> The channel transaction capacity will need to be smaller than the channel 
> capacity.
> e.g. If your Channel capacity is set to 1 then Channel Transaction 
> Capacity should be set to something like 100.
> Specifically if we have clients with varying frequency of event generation, 
> i.e. some clients generating thousands of events/sec, while
> others at a much slower rate, what effect will different values of these 
> params have on these clients ?
> Transaction Capacity is going to be what throttles or limits how many events 
> the source can put into the channel. This is going to vary depending on how many 
> tiers of agents/collectors you have setup.
> In general though this should probably be equal to whatever you have the 
> batch size set to in your client.
> With regards to the hdfs batch size, the larger your batch size the better 
> performance will be. However, keep in mind that if a transaction fails the 
> entire transaction will be replayed which could have the implication of 
> duplicate events downstream.



[jira] [Created] (HADOOP-9198) Update Flume Wiki and User Guide to provide clearer explanation of BatchSize, ChannelCapacity and ChannelTransactionCapacity properties.

2013-01-11 Thread Jeff Lord (JIRA)
Jeff Lord created HADOOP-9198:
-

 Summary: Update Flume Wiki and User Guide to provide clearer 
explanation of BatchSize, ChannelCapacity and ChannelTransactionCapacity 
properties.
 Key: HADOOP-9198
 URL: https://issues.apache.org/jira/browse/HADOOP-9198
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Jeff Lord


It would be good if we refined our wiki and user guide to help explain the 
following in a more clear fashion:

1) Batch Size 
  1.a) When configured by client code using the flume-core-sdk, to send events 
to flume avro source.
The flume client sdk has an appendBatch method. This will take a list of events 
and send them to the source as a batch. This is the size of the number of 
events to be passed to the source at one time.

  1.b) When set as a parameter on HDFS sink (or other sinks which support 
BatchSize parameter)
This is the number of events written to a file before it is flushed to HDFS.

2)
  2.a) Channel Capacity
This is the maximum number of events the channel can hold.

  2.b) Channel Transaction Capacity.
This is the max number of events stored in the channel per transaction.

How will setting these parameters to different values affect throughput and 
latency in event flow?

In general you will see better throughput by using memory channel as opposed to 
using file channel at the loss of durability.

The channel capacity is going to need to be sized such that it is large enough 
to hold as many events as will be added to it by upstream agents. Ideal flow 
would see the sink draining events from the channel faster than it is having 
events added by its source.

The channel transaction capacity will need to be smaller than the channel 
capacity.
e.g. If your Channel capacity is set to 1 then Channel Transaction Capacity 
should be set to something like 100.

Specifically if we have clients with varying frequency of event generation, 
i.e. some clients generating thousands of events/sec, while
others at a much slower rate, what effect will different values of these params 
have on these clients ?

Transaction Capacity is going to be what throttles or limits how many events 
the source can put into the channel. This is going to vary depending on how many 
tiers of agents/collectors you have setup.
In general though this should probably be equal to whatever you have the batch 
size set to in your client.

With regards to the hdfs batch size, the larger your batch size the better 
performance will be. However, keep in mind that if a transaction fails the 
entire transaction will be replayed which could have the implication of 
duplicate events downstream.
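Concretely, the relationships above might look like this in an agent's properties file (the agent and channel names are hypothetical and the values purely illustrative: transactionCapacity sits at the client batch size and well below capacity):

```
agent.channels = c1
agent.channels.c1.type = memory
# Maximum number of events the channel can hold at once.
agent.channels.c1.capacity = 10000
# Maximum events per put/take transaction; keep this below capacity and
# roughly equal to the client's batch size.
agent.channels.c1.transactionCapacity = 100
```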



[jira] [Commented] (HADOOP-9078) enhance unit-test coverage of class org.apache.hadoop.fs.FileContext

2013-01-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13551242#comment-13551242
 ] 

Hadoop QA commented on HADOOP-9078:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12564436/HADOOP-9078-branch-2--b.patch
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2025//console

This message is automatically generated.

> enhance unit-test coverage of class org.apache.hadoop.fs.FileContext
> 
>
> Key: HADOOP-9078
> URL: https://issues.apache.org/jira/browse/HADOOP-9078
> Project: Hadoop Common
>  Issue Type: Test
>Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
>Reporter: Ivan A. Veselovsky
>Assignee: Ivan A. Veselovsky
> Attachments: HADOOP-9078--b.patch, HADOOP-9078-branch-0.23.patch, 
> HADOOP-9078-branch-2--b.patch, HADOOP-9078-branch-2.patch, HADOOP-9078.patch, 
> HADOOP-9078-patch-from-[trunk-gd]-to-[fb-HADOOP-9078-trunk-gd]-N1.patch
>
>




[jira] [Updated] (HADOOP-9078) enhance unit-test coverage of class org.apache.hadoop.fs.FileContext

2013-01-11 Thread Ivan A. Veselovsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan A. Veselovsky updated HADOOP-9078:
---

Attachment: HADOOP-9078-branch-2--b.patch
HADOOP-9078--b.patch

The patches of version "--b" fix merge conflicts with some incoming changes.

> enhance unit-test coverage of class org.apache.hadoop.fs.FileContext
> 
>
> Key: HADOOP-9078
> URL: https://issues.apache.org/jira/browse/HADOOP-9078
> Project: Hadoop Common
>  Issue Type: Test
>Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
>Reporter: Ivan A. Veselovsky
>Assignee: Ivan A. Veselovsky
> Attachments: HADOOP-9078--b.patch, HADOOP-9078-branch-0.23.patch, 
> HADOOP-9078-branch-2--b.patch, HADOOP-9078-branch-2.patch, HADOOP-9078.patch, 
> HADOOP-9078-patch-from-[trunk-gd]-to-[fb-HADOOP-9078-trunk-gd]-N1.patch
>
>




[jira] [Commented] (HADOOP-9097) Maven RAT plugin is not checking all source files

2013-01-11 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13551220#comment-13551220
 ] 

Tom White commented on HADOOP-9097:
---

Regarding the source files with third party licenses, by my reading of 
http://apache.org/legal/resolved.html#required-third-party-notices and the 
licenses in hadoop-hdfs-project/hadoop-hdfs/src/main/native/util/tree.h and 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/lz4.c,
 it is necessary to add the licenses to the Hadoop LICENSE.txt files. I see 
that lz4.c's license is in the common LICENSE.txt file, but tree.h isn't in 
HDFS's. So that should be fixed.

I notice that the following files are empty and can presumably be removed:
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/datanode/DataXceiverAspects.aj
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolProtocolBuffers/overview.html
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/MockApp.java
 
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/MockContainer.java
 

Though not strictly necessary, it would be nice to add .git/** and .idea/** to 
the top-level excludes.
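One way to enumerate such zero-length files locally is a plain find; the throwaway directory below just demonstrates the invocation:

```shell
# Build a scratch directory with one empty and one non-empty file.
tmp=$(mktemp -d)
touch "$tmp/empty.java"
printf 'x' > "$tmp/full.java"

# -empty matches zero-length regular files only.
find "$tmp" -type f -empty

rm -r "$tmp"
```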

With these changes I managed to get a clean run of {{mvn apache-rat:check}} 
with the combined patch.

+1 on the combined patch. Thanks a lot for doing this work Tom.

> Maven RAT plugin is not checking all source files
> -
>
> Key: HADOOP-9097
> URL: https://issues.apache.org/jira/browse/HADOOP-9097
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.0.3-alpha, 0.23.5
>Reporter: Tom White
>Assignee: Thomas Graves
>Priority: Critical
> Fix For: 2.0.3-alpha, 0.23.6
>
> Attachments: HADOOP-9097-branch-0.23-entire.patch, 
> HADOOP-9097-branch-0.23.patch, HADOOP-9097-entire.patch, HADOOP-9097.patch, 
> HADOOP-9097-remove-branch23.sh, HADOOP-9097-remove-branch2.sh, 
> HADOOP-9097-remove-entire.sh
>
>
> Running 'mvn apache-rat:check' passes, but running RAT by hand (by 
> downloading the JAR) produces some warnings for Java files, amongst others.



[jira] [Commented] (HADOOP-9097) Maven RAT plugin is not checking all source files

2013-01-11 Thread Thomas Graves (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13551176#comment-13551176
 ] 

Thomas Graves commented on HADOOP-9097:
---

The test has been failing on other builds and isn't related to this. The 
release audit warnings are due to needing the other 3 JIRAs.

> Maven RAT plugin is not checking all source files
> -
>
> Key: HADOOP-9097
> URL: https://issues.apache.org/jira/browse/HADOOP-9097
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.0.3-alpha, 0.23.5
>Reporter: Tom White
>Assignee: Thomas Graves
>Priority: Critical
> Fix For: 2.0.3-alpha, 0.23.6
>
> Attachments: HADOOP-9097-branch-0.23-entire.patch, 
> HADOOP-9097-branch-0.23.patch, HADOOP-9097-entire.patch, HADOOP-9097.patch, 
> HADOOP-9097-remove-branch23.sh, HADOOP-9097-remove-branch2.sh, 
> HADOOP-9097-remove-entire.sh
>
>
> Running 'mvn apache-rat:check' passes, but running RAT by hand (by 
> downloading the JAR) produces some warnings for Java files, amongst others.



[jira] [Commented] (HADOOP-8419) GzipCodec NPE upon reset with IBM JDK

2013-01-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13551106#comment-13551106
 ] 

Hudson commented on HADOOP-8419:


Integrated in Hadoop-Mapreduce-trunk #1310 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1310/])
HADOOP-8419. Fixed GzipCode NPE reset for IBM JDK. (Yu Li via eyang) 
(Revision 1431740)
HADOOP-8419. Fixed GzipCode NPE reset for IBM JDK. (Yu Li via eyang) (Revision 
1431739)

 Result = FAILURE
eyang : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1431740
Files : 
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/TestCompressionStreamReuse.java

eyang : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1431739
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/GzipCodec.java


> GzipCodec NPE upon reset with IBM JDK
> -
>
> Key: HADOOP-8419
> URL: https://issues.apache.org/jira/browse/HADOOP-8419
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Affects Versions: 1.0.3
>Reporter: Luke Lu
>Assignee: Yu Li
>  Labels: gzip, ibm-jdk
> Attachments: HADOOP-8419-branch-1.patch, 
> HADOOP-8419-branch1-v2.patch, HADOOP-8419-trunk.patch, 
> HADOOP-8419-trunk-v2.patch
>
>
> The GzipCodec will NPE upon reset after finish when the native zlib codec is 
> not loaded. When the native zlib is loaded the codec creates a 
> CompressorOutputStream that doesn't have the problem, otherwise, the 
> GZipCodec uses GZIPOutputStream which is extended to provide the resetState 
> method. Since IBM JDK 6 SR9 FP2 including the current JDK 6 SR10, 
> GZIPOutputStream#finish will release the underlying deflater, which causes 
> NPE upon reset. This seems to be an IBM JDK quirk as Sun JDK and OpenJDK 
> don't have this issue.



[jira] [Commented] (HADOOP-8419) GzipCodec NPE upon reset with IBM JDK

2013-01-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13551096#comment-13551096
 ] 

Hudson commented on HADOOP-8419:


Integrated in Hadoop-Hdfs-trunk #1282 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1282/])
HADOOP-8419. Fixed GzipCode NPE reset for IBM JDK. (Yu Li via eyang) 
(Revision 1431740)
HADOOP-8419. Fixed GzipCode NPE reset for IBM JDK. (Yu Li via eyang) (Revision 
1431739)

 Result = FAILURE
eyang : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1431740
Files : 
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/TestCompressionStreamReuse.java

eyang : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1431739
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/GzipCodec.java


> GzipCodec NPE upon reset with IBM JDK
> -
>
> Key: HADOOP-8419
> URL: https://issues.apache.org/jira/browse/HADOOP-8419
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Affects Versions: 1.0.3
>Reporter: Luke Lu
>Assignee: Yu Li
>  Labels: gzip, ibm-jdk
> Attachments: HADOOP-8419-branch-1.patch, 
> HADOOP-8419-branch1-v2.patch, HADOOP-8419-trunk.patch, 
> HADOOP-8419-trunk-v2.patch
>
>
> The GzipCodec will NPE upon reset after finish when the native zlib codec is 
> not loaded. When the native zlib is loaded the codec creates a 
> CompressorOutputStream that doesn't have the problem, otherwise, the 
> GZipCodec uses GZIPOutputStream which is extended to provide the resetState 
> method. Since IBM JDK 6 SR9 FP2 including the current JDK 6 SR10, 
> GZIPOutputStream#finish will release the underlying deflater, which causes 
> NPE upon reset. This seems to be an IBM JDK quirk as Sun JDK and OpenJDK 
> don't have this issue.



[jira] [Commented] (HADOOP-8419) GzipCodec NPE upon reset with IBM JDK

2013-01-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13551034#comment-13551034
 ] 

Hudson commented on HADOOP-8419:


Integrated in Hadoop-Yarn-trunk #93 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/93/])
HADOOP-8419. Fixed GzipCode NPE reset for IBM JDK. (Yu Li via eyang) 
(Revision 1431740)
HADOOP-8419. Fixed GzipCode NPE reset for IBM JDK. (Yu Li via eyang) (Revision 
1431739)

 Result = SUCCESS
eyang : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1431740
Files : 
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/TestCompressionStreamReuse.java

eyang : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1431739
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/GzipCodec.java


> GzipCodec NPE upon reset with IBM JDK
> -
>
> Key: HADOOP-8419
> URL: https://issues.apache.org/jira/browse/HADOOP-8419
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Affects Versions: 1.0.3
>Reporter: Luke Lu
>Assignee: Yu Li
>  Labels: gzip, ibm-jdk
> Attachments: HADOOP-8419-branch-1.patch, 
> HADOOP-8419-branch1-v2.patch, HADOOP-8419-trunk.patch, 
> HADOOP-8419-trunk-v2.patch
>
>
> The GzipCodec will NPE upon reset after finish when the native zlib codec is 
> not loaded. When the native zlib is loaded the codec creates a 
> CompressorOutputStream that doesn't have the problem, otherwise, the 
> GZipCodec uses GZIPOutputStream which is extended to provide the resetState 
> method. Since IBM JDK 6 SR9 FP2 including the current JDK 6 SR10, 
> GZIPOutputStream#finish will release the underlying deflater, which causes 
> NPE upon reset. This seems to be an IBM JDK quirk as Sun JDK and OpenJDK 
> don't have this issue.
