[jira] [Commented] (HADOOP-7602) wordcount, sort etc on har files fails with NPE

2011-09-09 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13101995#comment-13101995
 ] 

Jitendra Nath Pandey commented on HADOOP-7602:
--

The change in buildDTAuthority(InetSocketAddress addr) in SecurityUtil:
{code}
-String host= addr.getAddress().getHostAddress();
+String host = addr.getHostName();
+if (host == null) {
+  host = addr.getAddress().getHostAddress();
+}
{code}

This will cause it to return the hostname instead of the IP in the usual case. 
HADOOP-7510 is modifying that behavior; this patch should not change it. 
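
For reference, a minimal sketch (not the committed patch) of the fallback the 
quoted diff implements, using only standard java.net calls; the host:port 
concatenation at the end is an assumption about what the caller builds:
{code}
import java.net.InetSocketAddress;

public class DTAuthoritySketch {
  // Prefer the hostname the address was created with; fall back to the
  // dotted-quad IP when no hostname is available.
  static String buildDTAuthority(InetSocketAddress addr) {
    String host = addr.getHostName();
    if (host == null) {
      host = addr.getAddress().getHostAddress();
    }
    return host + ":" + addr.getPort();
  }
}
{code}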


> wordcount, sort etc on har files fails with NPE
> ---
>
> Key: HADOOP-7602
> URL: https://issues.apache.org/jira/browse/HADOOP-7602
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.20.205.0, 0.20.206.0
>Reporter: John George
>Assignee: John George
> Fix For: 0.20.205.0
>
> Attachments: hadoop-7602.1.patch, hadoop-7602.2.patch, 
> hadoop-7602.3.patch, hadoop-7602.daryns_comment.2.patch, 
> hadoop-7602.daryns_comment.patch, hadoop-7602.patch
>
>
> wordcount, sort etc on har files fails with 
> NPE@createSocketAddr(NetUtils.java:137). 

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7328) When a serializer class is missing, return null, not throw an NPE.

2011-09-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13101990#comment-13101990
 ] 

Hadoop QA commented on HADOOP-7328:
---

-1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12493899/0.20-security-HADOOP-7328.r7.diff
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 2 new or modified tests.

-1 patch.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/158//console

This message is automatically generated.

> When a serializer class is missing, return null, not throw an NPE.
> --
>
> Key: HADOOP-7328
> URL: https://issues.apache.org/jira/browse/HADOOP-7328
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 0.20.2
>Reporter: Harsh J
>Assignee: Harsh J
>  Labels: io, serialization
> Fix For: 0.23.0, 0.24.0
>
> Attachments: 0.20-security-HADOOP-7328.r7.diff, 
> 0.23-HADOOP-7328.r7.diff, HADOOP-7328.r1.diff, HADOOP-7328.r2.diff, 
> HADOOP-7328.r3.diff, HADOOP-7328.r4.diff, HADOOP-7328.r4.diff, 
> HADOOP-7328.r5.diff, HADOOP-7328.r6.diff, HADOOP-7328.r7.diff
>
>
> When you have a key/value class that's not Writable and you forget to attach 
> io.serializations for the same, an NPE is thrown by the tasks with no 
> information on why or what's missing and what led to it. I think a better 
> exception can be thrown by SerializationFactory instead of an NPE when a 
> class is not accepted by any of the loaded serializations.
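
A hedged sketch of the guard being discussed (the wrapper method and error 
message are illustrative, not the committed change; SerializationFactory and 
Serialization are the real org.apache.hadoop.io.serializer classes):
{code}
import org.apache.hadoop.io.serializer.Serialization;
import org.apache.hadoop.io.serializer.SerializationFactory;
import org.apache.hadoop.io.serializer.Serializer;

public class SerializationGuard {
  // Turn the silent null (and the eventual NPE in the caller) into a
  // descriptive error naming the offending class and the config knob.
  public static <T> Serializer<T> getSerializerOrFail(
      SerializationFactory factory, Class<T> c) {
    Serialization<T> serialization = factory.getSerialization(c);
    if (serialization == null) {
      throw new IllegalArgumentException("No serialization found for " + c
          + "; is it listed in io.serializations?");
    }
    return serialization.getSerializer(c);
  }
}
{code}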

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-5814) NativeS3FileSystem doesn't report progress when writing

2011-09-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-5814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13101986#comment-13101986
 ] 

Hadoop QA commented on HADOOP-5814:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12493897/HADOOP-5814.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 6 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests:
  
org.apache.hadoop.fs.s3native.TestInMemoryNativeS3FileSystemContract

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/157//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/157//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-auth.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/157//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/157//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-annotations.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/157//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-auth-examples.html
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/157//console

This message is automatically generated.

> NativeS3FileSystem doesn't report progress when writing
> ---
>
> Key: HADOOP-5814
> URL: https://issues.apache.org/jira/browse/HADOOP-5814
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Reporter: Tom White
>Assignee: Devaraj K
>  Labels: S3Native
> Attachments: HADOOP-5814.patch
>
>
> This results in timeouts since the whole file is uploaded in the close 
> method. See 
> http://www.mail-archive.com/core-user@hadoop.apache.org/msg09881.html.
> One solution is to keep a reference to the Progressable passed in to the 
> NativeS3FsOutputStream's constructor, and progress it during writes, and 
> while copying the backup file to S3 in the close method.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7328) When a serializer class is missing, return null, not throw an NPE.

2011-09-09 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HADOOP-7328:


Attachment: (was: 0.22-HADOOP-7328.r7.diff)

> When a serializer class is missing, return null, not throw an NPE.
> --
>
> Key: HADOOP-7328
> URL: https://issues.apache.org/jira/browse/HADOOP-7328
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 0.20.2
>Reporter: Harsh J
>Assignee: Harsh J
>  Labels: io, serialization
> Fix For: 0.23.0, 0.24.0
>
> Attachments: 0.20-security-HADOOP-7328.r7.diff, 
> 0.23-HADOOP-7328.r7.diff, HADOOP-7328.r1.diff, HADOOP-7328.r2.diff, 
> HADOOP-7328.r3.diff, HADOOP-7328.r4.diff, HADOOP-7328.r4.diff, 
> HADOOP-7328.r5.diff, HADOOP-7328.r6.diff, HADOOP-7328.r7.diff
>
>
> When you have a key/value class that's not Writable and you forget to attach 
> io.serializations for the same, an NPE is thrown by the tasks with no 
> information on why or what's missing and what led to it. I think a better 
> exception can be thrown by SerializationFactory instead of an NPE when a 
> class is not accepted by any of the loaded serializations.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7328) When a serializer class is missing, return null, not throw an NPE.

2011-09-09 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HADOOP-7328:


Attachment: 0.20-security-HADOOP-7328.r7.diff

Ditto patch for 0.20-security.

* No tab/space issues, so those changes are not present.
* CommonConfigurationKeys does not seem populated enough in 0.20-security, so 
that part of the patch isn't present either.

Tests pass
{code}

[junit] Running org.apache.hadoop.io.serializer.TestSerializationFactory
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.252 sec
{code}

Will upload complementing patch to MAPREDUCE-2584 since it is on the mapred/ 
side.

> When a serializer class is missing, return null, not throw an NPE.
> --
>
> Key: HADOOP-7328
> URL: https://issues.apache.org/jira/browse/HADOOP-7328
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 0.20.2
>Reporter: Harsh J
>Assignee: Harsh J
>  Labels: io, serialization
> Fix For: 0.23.0, 0.24.0
>
> Attachments: 0.20-security-HADOOP-7328.r7.diff, 
> 0.23-HADOOP-7328.r7.diff, HADOOP-7328.r1.diff, HADOOP-7328.r2.diff, 
> HADOOP-7328.r3.diff, HADOOP-7328.r4.diff, HADOOP-7328.r4.diff, 
> HADOOP-7328.r5.diff, HADOOP-7328.r6.diff, HADOOP-7328.r7.diff
>
>
> When you have a key/value class that's not Writable and you forget to attach 
> io.serializations for the same, an NPE is thrown by the tasks with no 
> information on why or what's missing and what led to it. I think a better 
> exception can be thrown by SerializationFactory instead of an NPE when a 
> class is not accepted by any of the loaded serializations.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-5814) NativeS3FileSystem doesn't report progress when writing

2011-09-09 Thread Subroto Sanyal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-5814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subroto Sanyal updated HADOOP-5814:
---

  Tags: hadoop common, S3Native
Labels: S3Native  (was: )
Status: Patch Available  (was: Open)

> NativeS3FileSystem doesn't report progress when writing
> ---
>
> Key: HADOOP-5814
> URL: https://issues.apache.org/jira/browse/HADOOP-5814
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Reporter: Tom White
>Assignee: Devaraj K
>  Labels: S3Native
> Attachments: HADOOP-5814.patch
>
>
> This results in timeouts since the whole file is uploaded in the close 
> method. See 
> http://www.mail-archive.com/core-user@hadoop.apache.org/msg09881.html.
> One solution is to keep a reference to the Progressable passed in to the 
> NativeS3FsOutputStream's constructor, and progress it during writes, and 
> while copying the backup file to S3 in the close method.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-5814) NativeS3FileSystem doesn't report progress when writing

2011-09-09 Thread Subroto Sanyal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-5814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subroto Sanyal updated HADOOP-5814:
---

Attachment: HADOOP-5814.patch

The patch is against trunk.
The patch creates an input stream that holds the Progressable reference and 
calls it periodically while reading from the underlying stream. The patch has 
been tested on our cluster and works fine.
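
A minimal sketch of such a wrapper, under assumed names (ProgressableInputStream 
is illustrative; only org.apache.hadoop.util.Progressable is the real interface):
{code}
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;
import org.apache.hadoop.util.Progressable;

// Reports liveness on every read so the task is not killed for inactivity
// while the backup file is being streamed up to S3.
class ProgressableInputStream extends FilterInputStream {
  private final Progressable progress;

  ProgressableInputStream(InputStream in, Progressable progress) {
    super(in);
    this.progress = progress;
  }

  @Override
  public int read(byte[] b, int off, int len) throws IOException {
    int n = in.read(b, off, len);
    if (progress != null) {
      progress.progress();
    }
    return n;
  }
}
{code}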

> NativeS3FileSystem doesn't report progress when writing
> ---
>
> Key: HADOOP-5814
> URL: https://issues.apache.org/jira/browse/HADOOP-5814
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Reporter: Tom White
>Assignee: Devaraj K
> Attachments: HADOOP-5814.patch
>
>
> This results in timeouts since the whole file is uploaded in the close 
> method. See 
> http://www.mail-archive.com/core-user@hadoop.apache.org/msg09881.html.
> One solution is to keep a reference to the Progressable passed in to the 
> NativeS3FsOutputStream's constructor, and progress it during writes, and 
> while copying the backup file to S3 in the close method.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7602) wordcount, sort etc on har files fails with NPE

2011-09-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13101969#comment-13101969
 ] 

Hadoop QA commented on HADOOP-7602:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12493893/hadoop-7602.3.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 3 new or modified tests.

-1 patch.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/156//console

This message is automatically generated.

> wordcount, sort etc on har files fails with NPE
> ---
>
> Key: HADOOP-7602
> URL: https://issues.apache.org/jira/browse/HADOOP-7602
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.20.205.0, 0.20.206.0
>Reporter: John George
>Assignee: John George
> Fix For: 0.20.205.0
>
> Attachments: hadoop-7602.1.patch, hadoop-7602.2.patch, 
> hadoop-7602.3.patch, hadoop-7602.daryns_comment.2.patch, 
> hadoop-7602.daryns_comment.patch, hadoop-7602.patch
>
>
> wordcount, sort etc on har files fails with 
> NPE@createSocketAddr(NetUtils.java:137). 

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7602) wordcount, sort etc on har files fails with NPE

2011-09-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13101964#comment-13101964
 ] 

Hadoop QA commented on HADOOP-7602:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12493893/hadoop-7602.3.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 3 new or modified tests.

-1 patch.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/155//console

This message is automatically generated.

> wordcount, sort etc on har files fails with NPE
> ---
>
> Key: HADOOP-7602
> URL: https://issues.apache.org/jira/browse/HADOOP-7602
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.20.205.0, 0.20.206.0
>Reporter: John George
>Assignee: John George
> Fix For: 0.20.205.0
>
> Attachments: hadoop-7602.1.patch, hadoop-7602.2.patch, 
> hadoop-7602.3.patch, hadoop-7602.daryns_comment.2.patch, 
> hadoop-7602.daryns_comment.patch, hadoop-7602.patch
>
>
> wordcount, sort etc on har files fails with 
> NPE@createSocketAddr(NetUtils.java:137). 

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7602) wordcount, sort etc on har files fails with NPE

2011-09-09 Thread John George (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John George updated HADOOP-7602:


Attachment: hadoop-7602.3.patch

Attaching a patch upgraded to the latest 205 with tests etc...

> wordcount, sort etc on har files fails with NPE
> ---
>
> Key: HADOOP-7602
> URL: https://issues.apache.org/jira/browse/HADOOP-7602
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.20.205.0, 0.20.206.0
>Reporter: John George
>Assignee: John George
> Fix For: 0.20.205.0
>
> Attachments: hadoop-7602.1.patch, hadoop-7602.2.patch, 
> hadoop-7602.3.patch, hadoop-7602.daryns_comment.2.patch, 
> hadoop-7602.daryns_comment.patch, hadoop-7602.patch
>
>
> wordcount, sort etc on har files fails with 
> NPE@createSocketAddr(NetUtils.java:137). 

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7602) wordcount, sort etc on har files fails with NPE

2011-09-09 Thread John George (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John George updated HADOOP-7602:


Attachment: hadoop-7602.2.patch

Patch based on the latest 205 branch

> wordcount, sort etc on har files fails with NPE
> ---
>
> Key: HADOOP-7602
> URL: https://issues.apache.org/jira/browse/HADOOP-7602
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.20.205.0, 0.20.206.0
>Reporter: John George
>Assignee: John George
> Fix For: 0.20.205.0
>
> Attachments: hadoop-7602.1.patch, hadoop-7602.2.patch, 
> hadoop-7602.daryns_comment.2.patch, hadoop-7602.daryns_comment.patch, 
> hadoop-7602.patch
>
>
> wordcount, sort etc on har files fails with 
> NPE@createSocketAddr(NetUtils.java:137). 

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7328) When a serializer class is missing, return null, not throw an NPE.

2011-09-09 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13101957#comment-13101957
 ] 

Harsh J commented on HADOOP-7328:
-

Yes, I'll post a 0.20-security backport as well.

> When a serializer class is missing, return null, not throw an NPE.
> --
>
> Key: HADOOP-7328
> URL: https://issues.apache.org/jira/browse/HADOOP-7328
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 0.20.2
>Reporter: Harsh J
>Assignee: Harsh J
>  Labels: io, serialization
> Fix For: 0.23.0, 0.24.0
>
> Attachments: 0.22-HADOOP-7328.r7.diff, 0.23-HADOOP-7328.r7.diff, 
> HADOOP-7328.r1.diff, HADOOP-7328.r2.diff, HADOOP-7328.r3.diff, 
> HADOOP-7328.r4.diff, HADOOP-7328.r4.diff, HADOOP-7328.r5.diff, 
> HADOOP-7328.r6.diff, HADOOP-7328.r7.diff
>
>
> When you have a key/value class that's not Writable and you forget to attach 
> io.serializations for the same, an NPE is thrown by the tasks with no 
> information on why or what's missing and what led to it. I think a better 
> exception can be thrown by SerializationFactory instead of an NPE when a 
> class is not accepted by any of the loaded serializations.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7119) add Kerberos HTTP SPNEGO authentication support to Hadoop JT/NN/DN/TT web-consoles

2011-09-09 Thread Sanjay Radia (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13101727#comment-13101727
 ] 

Sanjay Radia commented on HADOOP-7119:
--

Last patch has the site stuff. Ignore my previous comment about apt and 
forrest; I misunderstood Alejandro.

> add Kerberos HTTP SPNEGO authentication support to Hadoop JT/NN/DN/TT 
> web-consoles
> --
>
> Key: HADOOP-7119
> URL: https://issues.apache.org/jira/browse/HADOOP-7119
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 0.23.0
> Environment: all
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Fix For: 0.20.205.0, 0.23.0
>
> Attachments: HADOOP-7119v3.patch, HADOOP-7119v4-amendment.patch, 
> HADOOP-7119v4.patch, HADOOP-7119v5.patch, HADOOP-7119v6.patch, 
> ha-common-01.patch, ha-common-02.patch, ha-commons.patch, 
> spnego-20-security.patch, spnego-20-security2.patch, 
> spnego-20-security3.patch, spnego-20-security4.patch
>
>
> Currently the JT/NN/DN/TT web-consoles don't support any form of 
> authentication.
> Hadoop RPC API already supports Kerberos authentication.
> Kerberos enables single sign-on.
> Popular browsers (Firefox and Internet Explorer) have support for Kerberos 
> HTTP SPNEGO.
> Adding support for Kerberos HTTP SPNEGO to Hadoop web consoles would provide 
> a unified authentication mechanism and single sign-on for Hadoop web UI and 
> Hadoop RPC.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7610) /etc/profile.d does not exist on Debian

2011-09-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13101725#comment-13101725
 ] 

Hudson commented on HADOOP-7610:


Integrated in Hadoop-Common-trunk-Commit #865 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/865/])
HADOOP-7610. Fix for hadoop debian package. Contributed by Eric Yang

gkesavan : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1167428
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/packages/update-hadoop-env.sh


> /etc/profile.d does not exist on Debian
> ---
>
> Key: HADOOP-7610
> URL: https://issues.apache.org/jira/browse/HADOOP-7610
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 0.20.204.0, 0.23.0
> Environment: Java 6, Debian
>Reporter: Eric Yang
>Assignee: Eric Yang
> Fix For: 0.20.205.0, 0.23.0
>
> Attachments: HADOOP-7610-branch-0.20-security.patch, HADOOP-7610.patch
>
>
> As part of the post-installation script, a symlink is created at 
> /etc/profile.d/hadoop-env.sh to source /etc/hadoop/hadoop-env.sh.  Therefore, 
> users do not need to configure the HADOOP_* environment variables.  
> Unfortunately, /etc/profile.d only exists on Ubuntu.  [Section 9.9 of the Debian 
> Policy|http://www.debian.org/doc/debian-policy/ch-opersys.html#s9.9] states:
> {quote}
> A program must not depend on environment variables to get reasonable 
> defaults. (That's because these environment variables would have to be set in 
> a system-wide configuration file like /etc/profile, which is not supported by 
> all shells.)
> If a program usually depends on environment variables for its configuration, 
> the program should be changed to fall back to a reasonable default 
> configuration if these environment variables are not present. If this cannot 
> be done easily (e.g., if the source code of a non-free program is not 
> available), the program must be replaced by a small "wrapper" shell script 
> which sets the environment variables if they are not already defined, and 
> calls the original program.
> Here is an example of a wrapper script for this purpose:
> {noformat}
>  #!/bin/sh
>  BAR=${BAR:-/var/lib/fubar}
>  export BAR
>  exec /usr/lib/foo/foo "$@"
> {noformat}
> Furthermore, as /etc/profile is a configuration file of the base-files 
> package, other packages must not put any environment variables or other 
> commands into that file.
> {quote}
> Hence the default environment setup should be skipped on Debian.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7610) /etc/profile.d does not exist on Debian

2011-09-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13101723#comment-13101723
 ] 

Hudson commented on HADOOP-7610:


Integrated in Hadoop-Hdfs-trunk-Commit #942 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/942/])
HADOOP-7610. Fix for hadoop debian package. Contributed by Eric Yang

gkesavan : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1167428
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/packages/update-hadoop-env.sh


> /etc/profile.d does not exist on Debian
> ---
>
> Key: HADOOP-7610
> URL: https://issues.apache.org/jira/browse/HADOOP-7610
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 0.20.204.0, 0.23.0
> Environment: Java 6, Debian
>Reporter: Eric Yang
>Assignee: Eric Yang
> Fix For: 0.20.205.0, 0.23.0
>
> Attachments: HADOOP-7610-branch-0.20-security.patch, HADOOP-7610.patch
>
>
> As part of the post-installation script, a symlink is created at 
> /etc/profile.d/hadoop-env.sh to source /etc/hadoop/hadoop-env.sh.  Therefore, 
> users do not need to configure the HADOOP_* environment variables.  
> Unfortunately, /etc/profile.d only exists on Ubuntu.  [Section 9.9 of the Debian 
> Policy|http://www.debian.org/doc/debian-policy/ch-opersys.html#s9.9] states:
> {quote}
> A program must not depend on environment variables to get reasonable 
> defaults. (That's because these environment variables would have to be set in 
> a system-wide configuration file like /etc/profile, which is not supported by 
> all shells.)
> If a program usually depends on environment variables for its configuration, 
> the program should be changed to fall back to a reasonable default 
> configuration if these environment variables are not present. If this cannot 
> be done easily (e.g., if the source code of a non-free program is not 
> available), the program must be replaced by a small "wrapper" shell script 
> which sets the environment variables if they are not already defined, and 
> calls the original program.
> Here is an example of a wrapper script for this purpose:
> {noformat}
>  #!/bin/sh
>  BAR=${BAR:-/var/lib/fubar}
>  export BAR
>  exec /usr/lib/foo/foo "$@"
> {noformat}
> Furthermore, as /etc/profile is a configuration file of the base-files 
> package, other packages must not put any environment variables or other 
> commands into that file.
> {quote}
> Hence the default environment setup should be skipped on Debian.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7610) /etc/profile.d does not exist on Debian

2011-09-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13101720#comment-13101720
 ] 

Hudson commented on HADOOP-7610:


Integrated in Hadoop-Mapreduce-trunk-Commit #876 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/876/])
HADOOP-7610. Fix for hadoop debian package. Contributed by Eric Yang

gkesavan : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1167428
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/packages/update-hadoop-env.sh


> /etc/profile.d does not exist on Debian
> ---
>
> Key: HADOOP-7610
> URL: https://issues.apache.org/jira/browse/HADOOP-7610
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 0.20.204.0, 0.23.0
> Environment: Java 6, Debian
>Reporter: Eric Yang
>Assignee: Eric Yang
> Fix For: 0.20.205.0, 0.23.0
>
> Attachments: HADOOP-7610-branch-0.20-security.patch, HADOOP-7610.patch
>
>
> As part of the post-installation script, a symlink is created at 
> /etc/profile.d/hadoop-env.sh to source /etc/hadoop/hadoop-env.sh.  Therefore, 
> users do not need to configure the HADOOP_* environment variables.  
> Unfortunately, /etc/profile.d only exists on Ubuntu.  [Section 9.9 of the Debian 
> Policy|http://www.debian.org/doc/debian-policy/ch-opersys.html#s9.9] states:
> {quote}
> A program must not depend on environment variables to get reasonable 
> defaults. (That's because these environment variables would have to be set in 
> a system-wide configuration file like /etc/profile, which is not supported by 
> all shells.)
> If a program usually depends on environment variables for its configuration, 
> the program should be changed to fall back to a reasonable default 
> configuration if these environment variables are not present. If this cannot 
> be done easily (e.g., if the source code of a non-free program is not 
> available), the program must be replaced by a small "wrapper" shell script 
> which sets the environment variables if they are not already defined, and 
> calls the original program.
> Here is an example of a wrapper script for this purpose:
> {noformat}
>  #!/bin/sh
>  BAR=${BAR:-/var/lib/fubar}
>  export BAR
>  exec /usr/lib/foo/foo "$@"
> {noformat}
> Furthermore, as /etc/profile is a configuration file of the base-files 
> package, other packages must not put any environment variables or other 
> commands into that file.
> {quote}
> Hence the default environment setup should be skipped on Debian.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7610) /etc/profile.d does not exist on Debian

2011-09-09 Thread Giridharan Kesavan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giridharan Kesavan updated HADOOP-7610:
---

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks Eric, I just committed this patch to trunk and the 20-security branch.

> /etc/profile.d does not exist on Debian
> ---
>
> Key: HADOOP-7610
> URL: https://issues.apache.org/jira/browse/HADOOP-7610
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 0.20.204.0, 0.23.0
> Environment: Java 6, Debian
>Reporter: Eric Yang
>Assignee: Eric Yang
> Fix For: 0.20.205.0, 0.23.0
>
> Attachments: HADOOP-7610-branch-0.20-security.patch, HADOOP-7610.patch
>
>
> As part of the post-installation script, a symlink is created at 
> /etc/profile.d/hadoop-env.sh to source /etc/hadoop/hadoop-env.sh.  Therefore, 
> users do not need to configure the HADOOP_* environment variables.  
> Unfortunately, /etc/profile.d only exists on Ubuntu.  [Section 9.9 of the Debian 
> Policy|http://www.debian.org/doc/debian-policy/ch-opersys.html#s9.9] states:
> {quote}
> A program must not depend on environment variables to get reasonable 
> defaults. (That's because these environment variables would have to be set in 
> a system-wide configuration file like /etc/profile, which is not supported by 
> all shells.)
> If a program usually depends on environment variables for its configuration, 
> the program should be changed to fall back to a reasonable default 
> configuration if these environment variables are not present. If this cannot 
> be done easily (e.g., if the source code of a non-free program is not 
> available), the program must be replaced by a small "wrapper" shell script 
> which sets the environment variables if they are not already defined, and 
> calls the original program.
> Here is an example of a wrapper script for this purpose:
> {noformat}
>  #!/bin/sh
>  BAR=${BAR:-/var/lib/fubar}
>  export BAR
>  exec /usr/lib/foo/foo "$@"
> {noformat}
> Furthermore, as /etc/profile is a configuration file of the base-files 
> package, other packages must not put any environment variables or other 
> commands into that file.
> {quote}
> Hence the default environment setup should be skipped on Debian.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7599) Improve hadoop setup conf script to setup secure Hadoop cluster

2011-09-09 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13101715#comment-13101715
 ] 

Eric Yang commented on HADOOP-7599:
---

In the patch, I put in configuration properties that are not security 
related but are good settings that have been tested to work on large 0.20.2xx 
clusters.

mapred.heartbeats.in.second - enable HADOOP-5784
mapreduce.tasktracker.outofband.heartbeat - enable MAPREDUCE-270
mapred.jobtracker.maxtasks.per.job - safeguard the JobTracker from running out 
of memory
mapred.jobtracker.retirejob.check - 1
mapred.jobtracker.retirejob.interval - 0
mapred.map.tasks.speculative.execution - false
mapred.reduce.tasks.speculative.execution - false
mapred.tasktracker.tasks.sleeptime-before-sigkill - 250
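
For illustration only, the properties above that carry explicit values, set 
programmatically through org.apache.hadoop.conf.Configuration (in a deployment 
they would normally live in mapred-site.xml):
{code}
import org.apache.hadoop.conf.Configuration;

public class RecommendedMapredConf {
  public static Configuration apply(Configuration conf) {
    conf.setInt("mapred.jobtracker.retirejob.check", 1);
    conf.setInt("mapred.jobtracker.retirejob.interval", 0);
    conf.setBoolean("mapred.map.tasks.speculative.execution", false);
    conf.setBoolean("mapred.reduce.tasks.speculative.execution", false);
    conf.setInt("mapred.tasktracker.tasks.sleeptime-before-sigkill", 250);
    return conf;
  }
}
{code}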

> Improve hadoop setup conf script to setup secure Hadoop cluster
> ---
>
> Key: HADOOP-7599
> URL: https://issues.apache.org/jira/browse/HADOOP-7599
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 0.20.203.0
> Environment: Java 6, RHEL 5.6
>Reporter: Eric Yang
>Assignee: Eric Yang
> Fix For: 0.20.205.0
>
> Attachments: HADOOP-7599-1.patch, HADOOP-7599-2.patch, 
> HADOOP-7599-3.patch, HADOOP-7599-trunk-2.patch, HADOOP-7599-trunk-3.patch, 
> HADOOP-7599-trunk.patch, HADOOP-7599.patch
>
>
> Setting up a secure Hadoop cluster requires a lot of manual setup.  The 
> motivation of this jira is to provide setup scripts that automate setting up 
> a secure Hadoop cluster.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7621) alfredo config should be in a file not readable by users

2011-09-09 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13101709#comment-13101709
 ] 

Alejandro Abdelnur commented on HADOOP-7621:


Currently we have a config file for the task controller which has similar 
security requirements.

We could consolidate things in a security-site.xml file that is not readable by 
users.

And/or (though this is larger in scope) we could decouple user config from 
cluster config.

> alfredo config should be in a file not readable by users
> 
>
> Key: HADOOP-7621
> URL: https://issues.apache.org/jira/browse/HADOOP-7621
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.20.205.0, 0.23.0, 0.24.0
>Reporter: Alejandro Abdelnur
>Priority: Critical
> Fix For: 0.20.205.0, 0.23.0, 0.24.0
>
>
> [thanks ATM for pointing this one out]
> The Alfredo configuration is currently stored in the core-site.xml file; this 
> file is readable by users (it must be, as Configuration defaults must be 
> loaded).
> One of the Alfredo config values is a secret which is used by all nodes to 
> sign/verify the authentication cookie.
> A user could get hold of this secret and forge authentication cookies for 
> other users.
> Because of this, the Alfredo configuration should be moved to a file that is 
> not readable by users.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HADOOP-7621) alfredo config should be in a file not readable by users

2011-09-09 Thread Alejandro Abdelnur (JIRA)
alfredo config should be in a file not readable by users


 Key: HADOOP-7621
 URL: https://issues.apache.org/jira/browse/HADOOP-7621
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 0.20.205.0, 0.23.0, 0.24.0
Reporter: Alejandro Abdelnur
Priority: Critical
 Fix For: 0.20.205.0, 0.23.0, 0.24.0


[thanks ATM for pointing this one out]

The Alfredo configuration is currently stored in the core-site.xml file; this 
file is readable by users (it must be, as Configuration defaults must be loaded).

One of the Alfredo config values is a secret which is used by all nodes to 
sign/verify the authentication cookie.

A user could get hold of this secret and forge authentication cookies for other 
users.

Because of this, the Alfredo configuration should be moved to a file that is not 
readable by users.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7599) Improve hadoop setup conf script to setup secure Hadoop cluster

2011-09-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13101702#comment-13101702
 ] 

Hadoop QA commented on HADOOP-7599:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12493878/HADOOP-7599-3.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

-1 patch.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/154//console

This message is automatically generated.

> Improve hadoop setup conf script to setup secure Hadoop cluster
> ---
>
> Key: HADOOP-7599
> URL: https://issues.apache.org/jira/browse/HADOOP-7599
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 0.20.203.0
> Environment: Java 6, RHEL 5.6
>Reporter: Eric Yang
>Assignee: Eric Yang
> Fix For: 0.20.205.0
>
> Attachments: HADOOP-7599-1.patch, HADOOP-7599-2.patch, 
> HADOOP-7599-3.patch, HADOOP-7599-trunk-2.patch, HADOOP-7599-trunk-3.patch, 
> HADOOP-7599-trunk.patch, HADOOP-7599.patch
>
>
> Setting up a secure Hadoop cluster requires a lot of manual setup.  The 
> motivation of this jira is to provide setup scripts that automate setting up 
> a secure Hadoop cluster.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7119) add Kerberos HTTP SPNEGO authentication support to Hadoop JT/NN/DN/TT web-consoles

2011-09-09 Thread Sanjay Radia (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sanjay Radia updated HADOOP-7119:
-

Attachment: spnego-20-security4.patch

> add Kerberos HTTP SPNEGO authentication support to Hadoop JT/NN/DN/TT 
> web-consoles
> --
>
> Key: HADOOP-7119
> URL: https://issues.apache.org/jira/browse/HADOOP-7119
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 0.23.0
> Environment: all
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Fix For: 0.20.205.0, 0.23.0
>
> Attachments: HADOOP-7119v3.patch, HADOOP-7119v4-amendment.patch, 
> HADOOP-7119v4.patch, HADOOP-7119v5.patch, HADOOP-7119v6.patch, 
> ha-common-01.patch, ha-common-02.patch, ha-commons.patch, 
> spnego-20-security.patch, spnego-20-security2.patch, 
> spnego-20-security3.patch, spnego-20-security4.patch
>
>
> Currently the JT/NN/DN/TT web-consoles don't support any form of 
> authentication.
> Hadoop RPC API already supports Kerberos authentication.
> Kerberos enables single sign-on.
> Popular browsers (Firefox and Internet Explorer) have support for Kerberos 
> HTTP SPNEGO.
> Adding support for Kerberos HTTP SPNEGO to Hadoop web consoles would provide 
> a unified authentication mechanism and single sign-on for Hadoop web UI and 
> Hadoop RPC.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7599) Improve hadoop setup conf script to setup secure Hadoop cluster

2011-09-09 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated HADOOP-7599:
--

Attachment: HADOOP-7599-3.patch
HADOOP-7599-trunk-3.patch

Update reference to task tracker group.

> Improve hadoop setup conf script to setup secure Hadoop cluster
> ---
>
> Key: HADOOP-7599
> URL: https://issues.apache.org/jira/browse/HADOOP-7599
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 0.20.203.0
> Environment: Java 6, RHEL 5.6
>Reporter: Eric Yang
>Assignee: Eric Yang
> Fix For: 0.20.205.0
>
> Attachments: HADOOP-7599-1.patch, HADOOP-7599-2.patch, 
> HADOOP-7599-3.patch, HADOOP-7599-trunk-2.patch, HADOOP-7599-trunk-3.patch, 
> HADOOP-7599-trunk.patch, HADOOP-7599.patch
>
>
> Setting up a secure Hadoop cluster requires a lot of manual setup.  The 
> motivation of this jira is to provide setup scripts that automate setting up 
> a secure Hadoop cluster.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7599) Improve hadoop setup conf script to setup secure Hadoop cluster

2011-09-09 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13101685#comment-13101685
 ] 

Devaraj Das commented on HADOOP-7599:
-

Please remove the config properties that aren't generated by the scripts in 
this patch (bullet 12 in my first comment). I also noticed that there are 
references to the 'hadoop' group still present. Please replace them with the 
variable you defined for the special group.

> Improve hadoop setup conf script to setup secure Hadoop cluster
> ---
>
> Key: HADOOP-7599
> URL: https://issues.apache.org/jira/browse/HADOOP-7599
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 0.20.203.0
> Environment: Java 6, RHEL 5.6
>Reporter: Eric Yang
>Assignee: Eric Yang
> Fix For: 0.20.205.0
>
> Attachments: HADOOP-7599-1.patch, HADOOP-7599-2.patch, 
> HADOOP-7599-3.patch, HADOOP-7599-trunk-2.patch, HADOOP-7599-trunk-3.patch, 
> HADOOP-7599-trunk.patch, HADOOP-7599.patch
>
>
> Setting up a secure Hadoop cluster requires a lot of manual setup.  The 
> motivation of this jira is to provide setup scripts that automate setting up 
> a secure Hadoop cluster.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7510) Tokens should use original hostname provided instead of ip

2011-09-09 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13101672#comment-13101672
 ] 

Jitendra Nath Pandey commented on HADOOP-7510:
--

The patch needs to be updated because it conflicts with the MAPREDUCE-2764 
commit.

> Tokens should use original hostname provided instead of ip
> --
>
> Key: HADOOP-7510
> URL: https://issues.apache.org/jira/browse/HADOOP-7510
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 0.20.205.0
>
> Attachments: HADOOP-7510-2.patch, HADOOP-7510-3.patch, 
> HADOOP-7510-4.patch, HADOOP-7510.patch
>
>
> Tokens currently store the ip:port of the remote server.  This precludes 
> tokens from being used after a host's ip is changed.  Tokens should store the 
> hostname used to make the RPC connection.  This will enable new processes to 
> use their existing tokens.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-6722) NetUtils.connect should check that it hasn't connected a socket to itself

2011-09-09 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-6722:


Fix Version/s: 0.20.205.0

> NetUtils.connect should check that it hasn't connected a socket to itself
> -
>
> Key: HADOOP-6722
> URL: https://issues.apache.org/jira/browse/HADOOP-6722
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Fix For: 0.20.205.0, 0.21.0
>
> Attachments: HADOOP-6722.20s.patch, hadoop-6722.txt
>
>
> I had no idea this was possible, but it turns out that a TCP connection will 
> be established in the rare case that the local side of the socket binds to 
> the ephemeral port that you later try to connect to. This can present itself 
> on very rare occasions when an RPC client is trying to connect to a 
> daemon running on the same node, but that daemon is down. To see what I'm 
> talking about, run "while true ; do telnet localhost 60020 ; done" on a 
> multicore box and wait several minutes.
> This can be easily detected in NetUtils.connect by making sure the local 
> address/port is not equal to the remote address/port.
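
A minimal sketch of that check using standard java.net.Socket accessors (the 
class and method names are illustrative, not the committed NetUtils change):
{code}
import java.io.IOException;
import java.net.Socket;

public class SelfConnectCheck {
  // After connect() returns, verify the socket's local endpoint is not also
  // its remote endpoint, i.e. that it did not connect to itself.
  static void checkNotSelfConnected(Socket socket) throws IOException {
    if (socket.getLocalSocketAddress().equals(socket.getRemoteSocketAddress())) {
      socket.close();
      throw new IOException(
          "Socket connected to itself: " + socket.getRemoteSocketAddress());
    }
  }
}
{code}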

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7599) Improve hadoop setup conf script to setup secure Hadoop cluster

2011-09-09 Thread Matt Foley (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13101643#comment-13101643
 ] 

Matt Foley commented on HADOOP-7599:


Hi, I talked with Devaraj about item #9 in his comment 
[above|https://issues.apache.org/jira/browse/HADOOP-7599?focusedCommentId=13099188&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13099188].
  

I believe that for a simple setup what Eric is doing is okay, and I would like 
to have it in 205.  Please go ahead and commit, and we can continue making it 
better in the next release, perhaps by using rpm and deb upgrade/update 
features as Devaraj mentioned.  Thanks.

> Improve hadoop setup conf script to setup secure Hadoop cluster
> ---
>
> Key: HADOOP-7599
> URL: https://issues.apache.org/jira/browse/HADOOP-7599
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 0.20.203.0
> Environment: Java 6, RHEL 5.6
>Reporter: Eric Yang
>Assignee: Eric Yang
> Fix For: 0.20.205.0
>
> Attachments: HADOOP-7599-1.patch, HADOOP-7599-2.patch, 
> HADOOP-7599-trunk-2.patch, HADOOP-7599-trunk.patch, HADOOP-7599.patch
>
>
> Setting up a secure Hadoop cluster requires a lot of manual setup.  The 
> motivation of this jira is to provide setup scripts that automate setting up 
> a secure Hadoop cluster.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7599) Improve hadoop setup conf script to setup secure Hadoop cluster

2011-09-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13101636#comment-13101636
 ] 

Hadoop QA commented on HADOOP-7599:
---

-1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12493865/HADOOP-7599-trunk-2.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

-1 patch.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/153//console

This message is automatically generated.

> Improve hadoop setup conf script to setup secure Hadoop cluster
> ---
>
> Key: HADOOP-7599
> URL: https://issues.apache.org/jira/browse/HADOOP-7599
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 0.20.203.0
> Environment: Java 6, RHEL 5.6
>Reporter: Eric Yang
>Assignee: Eric Yang
> Fix For: 0.20.205.0
>
> Attachments: HADOOP-7599-1.patch, HADOOP-7599-2.patch, 
> HADOOP-7599-trunk-2.patch, HADOOP-7599-trunk.patch, HADOOP-7599.patch
>
>
> Setting up a secure Hadoop cluster requires a lot of manual setup.  The 
> motivation of this jira is to provide setup scripts that automate setting up 
> a secure Hadoop cluster.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Reopened] (HADOOP-5983) Namenode shouldn't read mapred-site.xml

2011-09-09 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-5983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers reopened HADOOP-5983:



I can confirm that this issue still exists on branch-0.20-security. Reopening 
it so it can be fixed on that branch.

Daryn, do you have time/interest to address this issue for 
branch-0.20-security? If not, I'd like to unassign it or maybe assign it to 
myself.

> Namenode shouldn't read mapred-site.xml
> ---
>
> Key: HADOOP-5983
> URL: https://issues.apache.org/jira/browse/HADOOP-5983
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 0.20.206.0
>Reporter: Rajiv Chittajallu
>Assignee: Daryn Sharp
>
> The name node seems to read mapred-site.xml and fails if it can't parse it.
> 2009-06-05 22:37:15,663 FATAL org.apache.hadoop.conf.Configuration: error 
> parsing conf file: org.xml.sax.SAXParseException: Error attempting to parse 
> XML file (href='/hadoop/conf/local/local-mapred-site.xml').
> 2009-06-05 22:37:15,664 ERROR 
> org.apache.hadoop.hdfs.server.namenode.NameNode: java.lang.RuntimeException: 
> org.xml.sax.SAXParseException: Error attempting to parse XML file 
> (href='/hadoop/conf/local/local-mapred-site.xml').
> In our config,  local-mapred-site.xml is included only in mapred-site.xml 
> which we don't push to the namenode.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-5983) Namenode shouldn't read mapred-site.xml

2011-09-09 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-5983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HADOOP-5983:
---

Affects Version/s: (was: 0.20.0)
   0.20.206.0

> Namenode shouldn't read mapred-site.xml
> ---
>
> Key: HADOOP-5983
> URL: https://issues.apache.org/jira/browse/HADOOP-5983
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 0.20.206.0
>Reporter: Rajiv Chittajallu
>Assignee: Daryn Sharp
>
> The name node seems to read mapred-site.xml and fails if it can't parse it.
> 2009-06-05 22:37:15,663 FATAL org.apache.hadoop.conf.Configuration: error 
> parsing conf file: org.xml.sax.SAXParseException: Error attempting to parse 
> XML file (href='/hadoop/conf/local/local-mapred-site.xml').
> 2009-06-05 22:37:15,664 ERROR 
> org.apache.hadoop.hdfs.server.namenode.NameNode: java.lang.RuntimeException: 
> org.xml.sax.SAXParseException: Error attempting to parse XML file 
> (href='/hadoop/conf/local/local-mapred-site.xml').
> In our config,  local-mapred-site.xml is included only in mapred-site.xml 
> which we don't push to the namenode.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7599) Improve hadoop setup conf script to setup secure Hadoop cluster

2011-09-09 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated HADOOP-7599:
--

Attachment: HADOOP-7599-trunk-2.patch
HADOOP-7599-2.patch

- Make sure keytab file ownership is set up correctly.

> Improve hadoop setup conf script to setup secure Hadoop cluster
> ---
>
> Key: HADOOP-7599
> URL: https://issues.apache.org/jira/browse/HADOOP-7599
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 0.20.203.0
> Environment: Java 6, RHEL 5.6
>Reporter: Eric Yang
>Assignee: Eric Yang
> Fix For: 0.20.205.0
>
> Attachments: HADOOP-7599-1.patch, HADOOP-7599-2.patch, 
> HADOOP-7599-trunk-2.patch, HADOOP-7599-trunk.patch, HADOOP-7599.patch
>
>
> Setting up a secure Hadoop cluster requires a lot of manual setup.  The 
> motivation of this jira is to provide setup scripts that automate setting up 
> a secure Hadoop cluster.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7610) /etc/profile.d does not exist on Debian

2011-09-09 Thread Giridharan Kesavan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giridharan Kesavan updated HADOOP-7610:
---

Hadoop Flags: [Reviewed]

> /etc/profile.d does not exist on Debian
> ---
>
> Key: HADOOP-7610
> URL: https://issues.apache.org/jira/browse/HADOOP-7610
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 0.20.204.0, 0.23.0
> Environment: Java 6, Debian
>Reporter: Eric Yang
>Assignee: Eric Yang
> Fix For: 0.20.205.0, 0.23.0
>
> Attachments: HADOOP-7610-branch-0.20-security.patch, HADOOP-7610.patch
>
>
> As part of post installation script, there is a symlink created in 
> /etc/profile.d/hadoop-env.sh to source /etc/hadoop/hadoop-env.sh.  Therefore, 
> users do not need to configure HADOOP_* environment.  Unfortunately, 
> /etc/profile.d only exists in Ubuntu.  [Section 9.9 of the Debian 
> Policy|http://www.debian.org/doc/debian-policy/ch-opersys.html#s9.9] states:
> {quote}
> A program must not depend on environment variables to get reasonable 
> defaults. (That's because these environment variables would have to be set in 
> a system-wide configuration file like /etc/profile, which is not supported by 
> all shells.)
> If a program usually depends on environment variables for its configuration, 
> the program should be changed to fall back to a reasonable default 
> configuration if these environment variables are not present. If this cannot 
> be done easily (e.g., if the source code of a non-free program is not 
> available), the program must be replaced by a small "wrapper" shell script 
> which sets the environment variables if they are not already defined, and 
> calls the original program.
> Here is an example of a wrapper script for this purpose:
> {noformat}
>  #!/bin/sh
>  BAR=${BAR:-/var/lib/fubar}
>  export BAR
>  exec /usr/lib/foo/foo "$@"
> {noformat}
> Furthermore, as /etc/profile is a configuration file of the base-files 
> package, other packages must not put any environment variables or other 
> commands into that file.
> {quote}
> Hence the default environment setup should skip for Debian.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7119) add Kerberos HTTP SPNEGO authentication support to Hadoop JT/NN/DN/TT web-consoles

2011-09-09 Thread Sanjay Radia (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sanjay Radia updated HADOOP-7119:
-

Attachment: spnego-20-security3.patch

Updated patch.
Still working on site.xml; this is in apt and will need to be ported to 
forrest for 20.

> add Kerberos HTTP SPNEGO authentication support to Hadoop JT/NN/DN/TT 
> web-consoles
> --
>
> Key: HADOOP-7119
> URL: https://issues.apache.org/jira/browse/HADOOP-7119
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 0.23.0
> Environment: all
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Fix For: 0.20.205.0, 0.23.0
>
> Attachments: HADOOP-7119v3.patch, HADOOP-7119v4-amendment.patch, 
> HADOOP-7119v4.patch, HADOOP-7119v5.patch, HADOOP-7119v6.patch, 
> ha-common-01.patch, ha-common-02.patch, ha-commons.patch, 
> spnego-20-security.patch, spnego-20-security2.patch, spnego-20-security3.patch
>
>
> Currently the JT/NN/DN/TT web-consoles don't support any form of 
> authentication.
> Hadoop RPC API already supports Kerberos authentication.
> Kerberos enables single sign-on.
> Popular browsers (Firefox and Internet Explorer) have support for Kerberos 
> HTTP SPNEGO.
> Adding support for Kerberos HTTP SPNEGO to Hadoop web consoles would provide 
> a unified authentication mechanism and single sign-on for Hadoop web UI and 
> Hadoop RPC.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7599) Improve hadoop setup conf script to setup secure Hadoop cluster

2011-09-09 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13101599#comment-13101599
 ] 

Devaraj Das commented on HADOOP-7599:
-

Went over the patch. Some comments:
1. Don't chmod the keytab dir contents to 755. The keytab files should be owned 
by the user running the respective daemon, and 700ed.
2. On bullet #9 in my last comment, you can do a check for empty config files 
(e.g., if certain marker strings occur in the file, the config file is not 
empty; see the sketch below). Not pretty, but safer. Long term, Hadoop could 
stop shipping the empty config files.
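
A minimal sketch of such a check, assuming {{<property>}} as the marker that 
distinguishes a populated *-site.xml (the marker choice is an assumption, not 
part of the suggestion above):

{code}
import java.io.File;
import java.io.IOException;
import java.util.Scanner;
import java.util.regex.Pattern;

// Hypothetical sketch of the empty-config check suggested above. The marker
// string is an assumption (a populated *-site.xml contains at least one
// <property> entry); this is not the actual setup-script implementation.
public class EmptyConfCheck {
  static boolean isEmptyConf(File confFile, String marker) throws IOException {
    Scanner scanner = new Scanner(confFile, "UTF-8");
    try {
      // The file counts as empty if the marker never appears in it.
      return scanner.findWithinHorizon(Pattern.quote(marker), 0) == null;
    } finally {
      scanner.close();
    }
  }

  public static void main(String[] args) throws IOException {
    System.out.println(isEmptyConf(new File(args[0]), "<property>"));
  }
}
{code}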

> Improve hadoop setup conf script to setup secure Hadoop cluster
> ---
>
> Key: HADOOP-7599
> URL: https://issues.apache.org/jira/browse/HADOOP-7599
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 0.20.203.0
> Environment: Java 6, RHEL 5.6
>Reporter: Eric Yang
>Assignee: Eric Yang
> Fix For: 0.20.205.0
>
> Attachments: HADOOP-7599-1.patch, HADOOP-7599-trunk.patch, 
> HADOOP-7599.patch
>
>
> Setting up a secure Hadoop cluster requires a lot of manual setup.  The 
> motivation of this jira is to provide setup scripts that automate setting up 
> a secure Hadoop cluster.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7328) When a serializer class is missing, return null, not throw an NPE.

2011-09-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13101588#comment-13101588
 ] 

Hudson commented on HADOOP-7328:


Integrated in Hadoop-Mapreduce-trunk-Commit #873 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/873/])
HADOOP-7328. When a serializer class is missing, return null, not throw an 
NPE. Contributed by Harsh J Chouraria.

todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1167363
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/serializer/SerializationFactory.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/serializer/TestSerializationFactory.java


> When a serializer class is missing, return null, not throw an NPE.
> --
>
> Key: HADOOP-7328
> URL: https://issues.apache.org/jira/browse/HADOOP-7328
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 0.20.2
>Reporter: Harsh J
>Assignee: Harsh J
>  Labels: io, serialization
> Fix For: 0.23.0, 0.24.0
>
> Attachments: 0.22-HADOOP-7328.r7.diff, 0.23-HADOOP-7328.r7.diff, 
> HADOOP-7328.r1.diff, HADOOP-7328.r2.diff, HADOOP-7328.r3.diff, 
> HADOOP-7328.r4.diff, HADOOP-7328.r4.diff, HADOOP-7328.r5.diff, 
> HADOOP-7328.r6.diff, HADOOP-7328.r7.diff
>
>
> When you have a key/value class that's non-Writable and you forget to attach 
> io.serializers for the same, an NPE is thrown by the tasks with no 
> information on why, what's missing, or what led to it. I think a better 
> exception can be thrown by SerializationFactory instead of an NPE when a 
> class is not accepted by any of the loaded ones.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7328) When a serializer class is missing, return null, not throw an NPE.

2011-09-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13101583#comment-13101583
 ] 

Hudson commented on HADOOP-7328:


Integrated in Hadoop-Hdfs-trunk-Commit #939 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/939/])
HADOOP-7328. When a serializer class is missing, return null, not throw an 
NPE. Contributed by Harsh J Chouraria.

todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1167363
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/serializer/SerializationFactory.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/serializer/TestSerializationFactory.java


> When a serializer class is missing, return null, not throw an NPE.
> --
>
> Key: HADOOP-7328
> URL: https://issues.apache.org/jira/browse/HADOOP-7328
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 0.20.2
>Reporter: Harsh J
>Assignee: Harsh J
>  Labels: io, serialization
> Fix For: 0.23.0, 0.24.0
>
> Attachments: 0.22-HADOOP-7328.r7.diff, 0.23-HADOOP-7328.r7.diff, 
> HADOOP-7328.r1.diff, HADOOP-7328.r2.diff, HADOOP-7328.r3.diff, 
> HADOOP-7328.r4.diff, HADOOP-7328.r4.diff, HADOOP-7328.r5.diff, 
> HADOOP-7328.r6.diff, HADOOP-7328.r7.diff
>
>
> When you have a key/value class that's non-Writable and you forget to attach 
> io.serializers for the same, an NPE is thrown by the tasks with no 
> information on why, what's missing, or what led to it. I think a better 
> exception can be thrown by SerializationFactory instead of an NPE when a 
> class is not accepted by any of the loaded ones.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7328) When a serializer class is missing, return null, not throw an NPE.

2011-09-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13101581#comment-13101581
 ] 

Hudson commented on HADOOP-7328:


Integrated in Hadoop-Common-trunk-Commit #862 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/862/])
HADOOP-7328. When a serializer class is missing, return null, not throw an 
NPE. Contributed by Harsh J Chouraria.

todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1167363
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/serializer/SerializationFactory.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/serializer/TestSerializationFactory.java


> When a serializer class is missing, return null, not throw an NPE.
> --
>
> Key: HADOOP-7328
> URL: https://issues.apache.org/jira/browse/HADOOP-7328
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 0.20.2
>Reporter: Harsh J
>Assignee: Harsh J
>  Labels: io, serialization
> Fix For: 0.23.0, 0.24.0
>
> Attachments: 0.22-HADOOP-7328.r7.diff, 0.23-HADOOP-7328.r7.diff, 
> HADOOP-7328.r1.diff, HADOOP-7328.r2.diff, HADOOP-7328.r3.diff, 
> HADOOP-7328.r4.diff, HADOOP-7328.r4.diff, HADOOP-7328.r5.diff, 
> HADOOP-7328.r6.diff, HADOOP-7328.r7.diff
>
>
> When you have a key/value class that's non-Writable and you forget to attach 
> io.serializers for the same, an NPE is thrown by the tasks with no 
> information on why, what's missing, or what led to it. I think a better 
> exception can be thrown by SerializationFactory instead of an NPE when a 
> class is not accepted by any of the loaded ones.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7328) When a serializer class is missing, return null, not throw an NPE.

2011-09-09 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HADOOP-7328:


Fix Version/s: 0.24.0

> When a serializer class is missing, return null, not throw an NPE.
> --
>
> Key: HADOOP-7328
> URL: https://issues.apache.org/jira/browse/HADOOP-7328
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 0.20.2
>Reporter: Harsh J
>Assignee: Harsh J
>  Labels: io, serialization
> Fix For: 0.23.0, 0.24.0
>
> Attachments: 0.22-HADOOP-7328.r7.diff, 0.23-HADOOP-7328.r7.diff, 
> HADOOP-7328.r1.diff, HADOOP-7328.r2.diff, HADOOP-7328.r3.diff, 
> HADOOP-7328.r4.diff, HADOOP-7328.r4.diff, HADOOP-7328.r5.diff, 
> HADOOP-7328.r6.diff, HADOOP-7328.r7.diff
>
>
> When you have a key/value class that's non-Writable and you forget to attach 
> io.serializers for the same, an NPE is thrown by the tasks with no 
> information on why, what's missing, or what led to it. I think a better 
> exception can be thrown by SerializationFactory instead of an NPE when a 
> class is not accepted by any of the loaded ones.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7328) When a serializer class is missing, return null, not throw an NPE.

2011-09-09 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13101575#comment-13101575
 ] 

Todd Lipcon commented on HADOOP-7328:
-

Ah, I see the dependent one is also ready to be committed. I committed this to 
0.23 and trunk. Can you post one against 0.20-security as well? It seems worth 
backporting (low risk, common new-user mistake).
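
For context, a rough sketch of what a caller can do once the factory returns 
null instead of NPEing; the helper and record class here are hypothetical, not 
part of the patch:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.serializer.Serialization;
import org.apache.hadoop.io.serializer.SerializationFactory;

// Hypothetical caller-side sketch: with the fix, getSerialization() returns
// null for an unregistered class, so the caller can raise a descriptive error
// instead of hitting a bare NPE. The class passed in stands for any
// non-Writable key/value type.
public class MissingSerializerCheck {
  public static <T> Serialization<T> requireSerialization(
      Configuration conf, Class<T> cls) {
    Serialization<T> s = new SerializationFactory(conf).getSerialization(cls);
    if (s == null) {
      throw new IllegalStateException("No serialization found for " + cls
          + "; did you forget to register one in io.serializations?");
    }
    return s;
  }
}
{code}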

> When a serializer class is missing, return null, not throw an NPE.
> --
>
> Key: HADOOP-7328
> URL: https://issues.apache.org/jira/browse/HADOOP-7328
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 0.20.2
>Reporter: Harsh J
>Assignee: Harsh J
>  Labels: io, serialization
> Fix For: 0.23.0
>
> Attachments: 0.22-HADOOP-7328.r7.diff, 0.23-HADOOP-7328.r7.diff, 
> HADOOP-7328.r1.diff, HADOOP-7328.r2.diff, HADOOP-7328.r3.diff, 
> HADOOP-7328.r4.diff, HADOOP-7328.r4.diff, HADOOP-7328.r5.diff, 
> HADOOP-7328.r6.diff, HADOOP-7328.r7.diff
>
>
> When you have a key/value class that's non-Writable and you forget to attach 
> io.serializers for the same, an NPE is thrown by the tasks with no 
> information on why, what's missing, or what led to it. I think a better 
> exception can be thrown by SerializationFactory instead of an NPE when a 
> class is not accepted by any of the loaded ones.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7328) When a serializer class is missing, return null, not throw an NPE.

2011-09-09 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13101568#comment-13101568
 ] 

Todd Lipcon commented on HADOOP-7328:
-

Latest patch seems good. Does this need to go in at the same time as the 
dependent patches? Or can it be committed first, with the others later?

> When a serializer class is missing, return null, not throw an NPE.
> --
>
> Key: HADOOP-7328
> URL: https://issues.apache.org/jira/browse/HADOOP-7328
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 0.20.2
>Reporter: Harsh J
>Assignee: Harsh J
>  Labels: io, serialization
> Fix For: 0.23.0
>
> Attachments: 0.22-HADOOP-7328.r7.diff, 0.23-HADOOP-7328.r7.diff, 
> HADOOP-7328.r1.diff, HADOOP-7328.r2.diff, HADOOP-7328.r3.diff, 
> HADOOP-7328.r4.diff, HADOOP-7328.r4.diff, HADOOP-7328.r5.diff, 
> HADOOP-7328.r6.diff, HADOOP-7328.r7.diff
>
>
> When you have a key/value class that's non-Writable and you forget to attach 
> io.serializers for the same, an NPE is thrown by the tasks with no 
> information on why, what's missing, or what led to it. I think a better 
> exception can be thrown by SerializationFactory instead of an NPE when a 
> class is not accepted by any of the loaded ones.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-1381) The distance between sync blocks in SequenceFiles should be configurable rather than hard coded to 2000 bytes

2011-09-09 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-1381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13101565#comment-13101565
 ] 

Todd Lipcon commented on HADOOP-1381:
-

Two minor nits:
- Can you add a check in the Writer constructor that the syncInterval option is 
valid? I think the minimum value would be SYNC_SIZE? (A sketch of such a check 
follows below.)
- Can you rename SYNC_INTERVAL to DEFAULT_SYNC_INTERVAL or 
SYNC_INTERVAL_DEFAULT? Even though it's currently public, I don't think this 
would be considered a public API, so changing it seems alright.

Otherwise looks good.
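
A rough sketch of the kind of guard meant in the first nit (the class name is 
illustrative, and the SYNC_SIZE value mirrors SequenceFile's 4-byte escape plus 
16-byte hash; this is not the actual patch):

{code}
// Illustrative sketch of the requested constructor check, not the real
// SequenceFile.Writer. SYNC_SIZE is the on-disk length of one sync marker.
class SyncIntervalGuard {
  static final int SYNC_SIZE = 20;                // 4-byte escape + 16-byte hash
  static final int SYNC_INTERVAL_DEFAULT = 2000;  // renamed per the review nit

  final int syncInterval;

  SyncIntervalGuard(int syncInterval) {
    if (syncInterval < SYNC_SIZE) {
      throw new IllegalArgumentException("syncInterval (" + syncInterval
          + ") must be at least SYNC_SIZE (" + SYNC_SIZE + ")");
    }
    this.syncInterval = syncInterval;
  }
}
{code}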

> The distance between sync blocks in SequenceFiles should be configurable 
> rather than hard coded to 2000 bytes
> -
>
> Key: HADOOP-1381
> URL: https://issues.apache.org/jira/browse/HADOOP-1381
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 0.22.0
>Reporter: Owen O'Malley
>Assignee: Harsh J
> Fix For: 0.23.0
>
> Attachments: HADOOP-1381.r1.diff, HADOOP-1381.r2.diff, 
> HADOOP-1381.r3.diff, HADOOP-1381.r4.diff
>
>
> Currently SequenceFiles put in sync blocks every 2000 bytes. It would be much 
> better if it were configurable, with a much higher default (1 MB or so?).

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7119) add Kerberos HTTP SPNEGO authentication support to Hadoop JT/NN/DN/TT web-consoles

2011-09-09 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13101553#comment-13101553
 ] 

Aaron T. Myers commented on HADOOP-7119:


bq. Regarding the missing testcases: those are the Kerberos ones, and a KDC 
setup is required to run them. Ideally we should have them in, but if pressed 
for time I think it is OK to commit as-is (exact working code from trunk) and 
open a JIRA to add them before the next maintenance release of the 2xx branch. 
Or, for the 205 release, add the Kerberos testcases to the exclude list in the 
build.

It seems fine to me to do that as a follow-up JIRA. Could someone please file 
that?

bq. What is missing are the docs (changes to the xdocs/site.xml and the new 
hadoop-xdocs/HttpAuthentication.xml file); I think those should go in.

Agreed.

I should also mention that Alejandro and I tested this patch manually yesterday 
(with the addition of AuthenticationFilterInitializer) and it worked like a 
charm, both from curl and in Firefox. So, +1 for the back-port once the above 
are addressed.

> add Kerberos HTTP SPNEGO authentication support to Hadoop JT/NN/DN/TT 
> web-consoles
> --
>
> Key: HADOOP-7119
> URL: https://issues.apache.org/jira/browse/HADOOP-7119
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 0.23.0
> Environment: all
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Fix For: 0.20.205.0, 0.23.0
>
> Attachments: HADOOP-7119v3.patch, HADOOP-7119v4-amendment.patch, 
> HADOOP-7119v4.patch, HADOOP-7119v5.patch, HADOOP-7119v6.patch, 
> ha-common-01.patch, ha-common-02.patch, ha-commons.patch, 
> spnego-20-security.patch, spnego-20-security2.patch
>
>
> Currently the JT/NN/DN/TT web-consoles don't support any form of 
> authentication.
> Hadoop RPC API already supports Kerberos authentication.
> Kerberos enables single sign-on.
> Popular browsers (Firefox and Internet Explorer) have support for Kerberos 
> HTTP SPNEGO.
> Adding support for Kerberos HTTP SPNEGO to Hadoop web consoles would provide 
> a unified authentication mechanism and single sign-on for Hadoop web UI and 
> Hadoop RPC.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7343) backport HADOOP-7008 and HADOOP-7042 to branch-0.20-security

2011-09-09 Thread Matt Foley (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13101552#comment-13101552
 ] 

Matt Foley commented on HADOOP-7343:


Please note that the Subversion Commit message is missing from this jira 
because the commit message had a typo: it used "MAPREDUCE-7343" instead of the 
correct "HADOOP-7343".  The robot that puts commit messages in Jiras is just 
driven by grep, and if the jira number is wrong, it doesn't know where to put 
the message.

Nevertheless, it was committed, as:

r1132794 | cdouglas | 2011-06-06 14:50:31 -0700 (Mon, 06 Jun 2011)

MAPREDUCE-7343. Make the number of warnings accepted by test-patch
configurable to limit false positives. Contributed by Thomas Graves


I'll correct the CHANGES.txt entry (which also used "MAPREDUCE-7343") in the 
next day or two.

> backport HADOOP-7008 and HADOOP-7042 to branch-0.20-security
> 
>
> Key: HADOOP-7343
> URL: https://issues.apache.org/jira/browse/HADOOP-7343
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 0.20.204.0
>Reporter: Thomas Graves
>Assignee: Thomas Graves
>Priority: Minor
> Fix For: 0.20.205.0
>
> Attachments: HADOOP-7343-20security.patch, 
> HADOOP-7343-20security.patch
>
>
> backport HADOOP-7008 and HADOOP-7042 to branch-0.20-security so that we can 
> enable test-patch.sh to have a configured number of acceptable findbugs and 
> javadoc warnings

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7343) backport HADOOP-7008 and HADOOP-7042 to branch-0.20-security

2011-09-09 Thread Matt Foley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-7343:
---

Fix Version/s: (was: 0.20.206.0)
   0.20.205.0

> backport HADOOP-7008 and HADOOP-7042 to branch-0.20-security
> 
>
> Key: HADOOP-7343
> URL: https://issues.apache.org/jira/browse/HADOOP-7343
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 0.20.204.0
>Reporter: Thomas Graves
>Assignee: Thomas Graves
>Priority: Minor
> Fix For: 0.20.205.0
>
> Attachments: HADOOP-7343-20security.patch, 
> HADOOP-7343-20security.patch
>
>
> backport HADOOP-7008 and HADOOP-7042 to branch-0.20-security so that we can 
> enable test-patch.sh to have a configured number of acceptable findbugs and 
> javadoc warnings

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7119) add Kerberos HTTP SPNEGO authentication support to Hadoop JT/NN/DN/TT web-consoles

2011-09-09 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13101544#comment-13101544
 ] 

Alejandro Abdelnur commented on HADOOP-7119:


Regarding the missing testcases: those are the Kerberos ones, and a KDC setup 
is required to run them. Ideally we should have them in, but if pressed for 
time I think it is OK to commit as-is (exact working code from trunk) and open 
a JIRA to add them before the next maintenance release of the 2xx branch. Or, 
for the 205 release, add the Kerberos testcases to the exclude list in the 
build.

What is missing are the docs (changes to the xdocs/site.xml and the new 
hadoop-xdocs/HttpAuthentication.xml file); I think those should go in.

Besides this, +1 (non-binding).


> add Kerberos HTTP SPNEGO authentication support to Hadoop JT/NN/DN/TT 
> web-consoles
> --
>
> Key: HADOOP-7119
> URL: https://issues.apache.org/jira/browse/HADOOP-7119
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 0.23.0
> Environment: all
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Fix For: 0.20.205.0, 0.23.0
>
> Attachments: HADOOP-7119v3.patch, HADOOP-7119v4-amendment.patch, 
> HADOOP-7119v4.patch, HADOOP-7119v5.patch, HADOOP-7119v6.patch, 
> ha-common-01.patch, ha-common-02.patch, ha-commons.patch, 
> spnego-20-security.patch, spnego-20-security2.patch
>
>
> Currently the JT/NN/DN/TT web-consoles don't support any form of 
> authentication.
> Hadoop RPC API already supports Kerberos authentication.
> Kerberos enables single sign-on.
> Popular browsers (Firefox and Internet Explorer) have support for Kerberos 
> HTTP SPNEGO.
> Adding support for Kerberos HTTP SPNEGO to Hadoop web consoles would provide 
> a unified authentication mechanism and single sign-on for Hadoop web UI and 
> Hadoop RPC.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7119) add Kerberos HTTP SPNEGO authentication support to Hadoop JT/NN/DN/TT web-consoles

2011-09-09 Thread Sanjay Radia (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sanjay Radia updated HADOOP-7119:
-

Attachment: spnego-20-security2.patch

Updated patch with AuthenticationFilterInitializer.java.

> add Kerberos HTTP SPNEGO authentication support to Hadoop JT/NN/DN/TT 
> web-consoles
> --
>
> Key: HADOOP-7119
> URL: https://issues.apache.org/jira/browse/HADOOP-7119
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 0.23.0
> Environment: all
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Fix For: 0.20.205.0, 0.23.0
>
> Attachments: HADOOP-7119v3.patch, HADOOP-7119v4-amendment.patch, 
> HADOOP-7119v4.patch, HADOOP-7119v5.patch, HADOOP-7119v6.patch, 
> ha-common-01.patch, ha-common-02.patch, ha-commons.patch, 
> spnego-20-security.patch, spnego-20-security2.patch
>
>
> Currently the JT/NN/DN/TT web-consoles don't support any form of 
> authentication.
> Hadoop RPC API already supports Kerberos authentication.
> Kerberos enables single sign-on.
> Popular browsers (Firefox and Internet Explorer) have support for Kerberos 
> HTTP SPNEGO.
> Adding support for Kerberos HTTP SPNEGO to Hadoop web consoles would provide 
> a unified authentication mechanism and single sign-on for Hadoop web UI and 
> Hadoop RPC.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7620) All profiles should build the javadoc

2011-09-09 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13101495#comment-13101495
 ] 

Todd Lipcon commented on HADOOP-7620:
-

I agree with Alejandro. Build turnaround time is a big pain, and especially on 
non-SSD, or even worse, NFS, the javadoc build takes quite a while. Lots of 
iops to create all those .html files.

To check that the javadoc changes don't introduce issues, we have test-patch.

> All profiles should build the javadoc
> -
>
> Key: HADOOP-7620
> URL: https://issues.apache.org/jira/browse/HADOOP-7620
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Owen O'Malley
>
> Currently, the default profile doesn't generate the javadoc, which gives the 
> developer a false sense of security. Leaving the forrest stuff in the doc 
> profile makes sense.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7620) All profiles should build the javadoc

2011-09-09 Thread Owen O'Malley (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13101484#comment-13101484
 ] 

Owen O'Malley commented on HADOOP-7620:
---

They don't take that much time and the developer absolutely should check the 
result of their changes to the javadoc. Having the equivalent of 
-DskipTests=true is fine, but by default it should generate the javadoc.

> All profiles should build the javadoc
> -
>
> Key: HADOOP-7620
> URL: https://issues.apache.org/jira/browse/HADOOP-7620
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Owen O'Malley
>
> Currently, the default profile doesn't generate the javadoc, which gives the 
> developer a false sense of security. Leaving the forrest stuff in the doc 
> profile makes sense.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7607) Simplify the RPC proxy cleanup process

2011-09-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13101441#comment-13101441
 ] 

Hudson commented on HADOOP-7607:


Integrated in Hadoop-Hdfs-trunk-Commit #937 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/937/])
HADOOP-7607 and MAPREDUCE-2934. Simplify the RPC proxy cleanup process. 
(atm)

atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1167318
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/AvroRpcEngine.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RPC.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RpcEngine.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/WritableRpcEngine.java
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/ipc/ProtoOverHadoopRpcEngine.java


> Simplify the RPC proxy cleanup process
> --
>
> Key: HADOOP-7607
> URL: https://issues.apache.org/jira/browse/HADOOP-7607
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 0.24.0
>Reporter: Aaron T. Myers
>Assignee: Aaron T. Myers
> Fix For: 0.24.0
>
> Attachments: hadoop-7607.0.patch, hadoop-7607.1.patch
>
>
> The process to clean up an RPC proxy object is to call RPC.stopProxy, which 
> looks up the RPCEngine previously associated with the interface which that 
> proxy object provides and calls RPCEngine.stopProxy passing in the proxy 
> object. Every concrete implementation of RPCEngine.stopProxy then looks up 
> the invocation handler associated with the proxy object and calls close() on 
> that invocation handler.
> This process can be simplified by cutting out the steps of looking up the 
> previously-registered RPCEngine, and instead just having RPC.stopProxy 
> directly look up the invocation handler for the proxy object and call close() 
> on it.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7620) All profiles should build the javadoc

2011-09-09 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13101443#comment-13101443
 ] 

Alejandro Abdelnur commented on HADOOP-7620:


Javadocs take time; if you are doing a build for a devel test, you don't want 
to wait for javadocs.

> All profiles should build the javadoc
> -
>
> Key: HADOOP-7620
> URL: https://issues.apache.org/jira/browse/HADOOP-7620
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Owen O'Malley
>
> Currently, the default profile doesn't generate the javadoc, which gives the 
> developer a false sense of security. Leaving the forrest stuff in the doc 
> profile makes sense.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7607) Simplify the RPC proxy cleanup process

2011-09-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13101431#comment-13101431
 ] 

Hudson commented on HADOOP-7607:


Integrated in Hadoop-Mapreduce-trunk-Commit #871 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/871/])
HADOOP-7607 and MAPREDUCE-2934. Simplify the RPC proxy cleanup process. 
(atm)

atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1167318
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/AvroRpcEngine.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RPC.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RpcEngine.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/WritableRpcEngine.java
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/ipc/ProtoOverHadoopRpcEngine.java


> Simplify the RPC proxy cleanup process
> --
>
> Key: HADOOP-7607
> URL: https://issues.apache.org/jira/browse/HADOOP-7607
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 0.24.0
>Reporter: Aaron T. Myers
>Assignee: Aaron T. Myers
> Fix For: 0.24.0
>
> Attachments: hadoop-7607.0.patch, hadoop-7607.1.patch
>
>
> The process to clean up an RPC proxy object is to call RPC.stopProxy, which 
> looks up the RPCEngine previously associated with the interface which that 
> proxy object provides and calls RPCEngine.stopProxy passing in the proxy 
> object. Every concrete implementation of RPCEngine.stopProxy then looks up 
> the invocation handler associated with the proxy object and calls close() on 
> that invocation handler.
> This process can be simplified by cutting out the steps of looking up the 
> previously-registered RPCEngine, and instead just having RPC.stopProxy 
> directly look up the invocation handler for the proxy object and call close() 
> on it.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7607) Simplify the RPC proxy cleanup process

2011-09-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13101433#comment-13101433
 ] 

Hudson commented on HADOOP-7607:


Integrated in Hadoop-Common-trunk-Commit #860 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/860/])
HADOOP-7607 and MAPREDUCE-2934. Simplify the RPC proxy cleanup process. 
(atm)

atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1167318
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/AvroRpcEngine.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RPC.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RpcEngine.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/WritableRpcEngine.java
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/ipc/ProtoOverHadoopRpcEngine.java


> Simplify the RPC proxy cleanup process
> --
>
> Key: HADOOP-7607
> URL: https://issues.apache.org/jira/browse/HADOOP-7607
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 0.24.0
>Reporter: Aaron T. Myers
>Assignee: Aaron T. Myers
> Fix For: 0.24.0
>
> Attachments: hadoop-7607.0.patch, hadoop-7607.1.patch
>
>
> The process to clean up an RPC proxy object is to call RPC.stopProxy, which 
> looks up the RPCEngine previously associated with the interface which that 
> proxy object provides and calls RPCEngine.stopProxy passing in the proxy 
> object. Every concrete implementation of RPCEngine.stopProxy then looks up 
> the invocation handler associated with the proxy object and calls close() on 
> that invocation handler.
> This process can be simplified by cutting out the steps of looking up the 
> previously-registered RPCEngine, and instead just having RPC.stopProxy 
> directly look up the invocation handler for the proxy object and call close() 
> on it.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7607) Simplify the RPC proxy cleanup process

2011-09-09 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HADOOP-7607:
---

  Resolution: Fixed
Hadoop Flags: [Reviewed]
  Status: Resolved  (was: Patch Available)

I've just committed this. Thanks a lot for the review, Todd.
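
For reference, a minimal sketch of the simplified cleanup shape described in 
the summary below; the class name is illustrative, and the Closeable-handler 
assumption is mine rather than the committed code:

{code}
import java.io.Closeable;
import java.io.IOException;
import java.lang.reflect.Proxy;

// Illustrative sketch only: stopProxy() can reach the invocation handler
// directly and close it, skipping the per-RpcEngine lookup. Assumes the
// handlers implement Closeable; this is not the committed patch.
public class ProxyCleanupSketch {
  public static void stopProxy(Object proxy) {
    if (proxy == null) {
      return;
    }
    Object handler = Proxy.getInvocationHandler(proxy);
    if (handler instanceof Closeable) {
      try {
        ((Closeable) handler).close();  // close the underlying connection
      } catch (IOException e) {
        throw new RuntimeException("Error closing proxy", e);
      }
    }
  }
}
{code}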

> Simplify the RPC proxy cleanup process
> --
>
> Key: HADOOP-7607
> URL: https://issues.apache.org/jira/browse/HADOOP-7607
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 0.24.0
>Reporter: Aaron T. Myers
>Assignee: Aaron T. Myers
> Fix For: 0.24.0
>
> Attachments: hadoop-7607.0.patch, hadoop-7607.1.patch
>
>
> The process to clean up an RPC proxy object is to call RPC.stopProxy, which 
> looks up the RPCEngine previously associated with the interface which that 
> proxy object provides and calls RPCEngine.stopProxy passing in the proxy 
> object. Every concrete implementation of RPCEngine.stopProxy then looks up 
> the invocation handler associated with the proxy object and calls close() on 
> that invocation handler.
> This process can be simplified by cutting out the steps of looking up the 
> previously-registered RPCEngine, and instead just having RPC.stopProxy 
> directly look up the invocation handler for the proxy object and call close() 
> on it.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7568) SequenceFile should not print into stdout

2011-09-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13101392#comment-13101392
 ] 

Hadoop QA commented on HADOOP-7568:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12493826/HADOOP-7568.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

-1 patch.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/152//console

This message is automatically generated.

> SequenceFile should not print into stdout
> -
>
> Key: HADOOP-7568
> URL: https://issues.apache.org/jira/browse/HADOOP-7568
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Affects Versions: 0.22.0
>Reporter: Konstantin Shvachko
> Fix For: 0.22.0
>
> Attachments: HADOOP-7568.patch
>
>
> The following line in {{SequenceFile.Reader.initialize()}} should be removed:
> {code}
> System.out.println("Setting end to " + end);
> {code}

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HADOOP-7620) All profiles should build the javadoc

2011-09-09 Thread Owen O'Malley (JIRA)
All profiles should build the javadoc
-

 Key: HADOOP-7620
 URL: https://issues.apache.org/jira/browse/HADOOP-7620
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Owen O'Malley


Currently, the default profile doesn't generate the javadoc, which gives the 
developer a false sense of security. Leaving the forrest stuff in the doc 
profile makes sense.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7568) SequenceFile should not print into stdout

2011-09-09 Thread Plamen Jeliazkov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Plamen Jeliazkov updated HADOOP-7568:
-

Attachment: HADOOP-7568.patch

Patch fix.

> SequenceFile should not print into stdout
> -
>
> Key: HADOOP-7568
> URL: https://issues.apache.org/jira/browse/HADOOP-7568
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Affects Versions: 0.22.0
>Reporter: Konstantin Shvachko
> Fix For: 0.22.0
>
> Attachments: HADOOP-7568.patch
>
>
> The following line in {{SequenceFile.Reader.initialize()}} should be removed:
> {code}
> System.out.println("Setting end to " + end);
> {code}

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7568) SequenceFile should not print into stdout

2011-09-09 Thread Plamen Jeliazkov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Plamen Jeliazkov updated HADOOP-7568:
-

Status: Patch Available  (was: Open)

> SequenceFile should not print into stdout
> -
>
> Key: HADOOP-7568
> URL: https://issues.apache.org/jira/browse/HADOOP-7568
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Affects Versions: 0.22.0
>Reporter: Konstantin Shvachko
> Fix For: 0.22.0
>
> Attachments: HADOOP-7568.patch
>
>
> The following line in {{SequenceFile.Reader.initialize()}} should be removed:
> {code}
> System.out.println("Setting end to " + end);
> {code}

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7510) Tokens should use original hostname provided instead of ip

2011-09-09 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13101353#comment-13101353
 ] 

Kihwal Lee commented on HADOOP-7510:


I've investigated the semantics and the actual implementations of the 
{{InetSocketAddress}} API and I see no uncertainties or confusion regarding the 
particular semantics of the methods Daryn is utilizing.



> Tokens should use original hostname provided instead of ip
> --
>
> Key: HADOOP-7510
> URL: https://issues.apache.org/jira/browse/HADOOP-7510
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 0.20.205.0
>
> Attachments: HADOOP-7510-2.patch, HADOOP-7510-3.patch, 
> HADOOP-7510-4.patch, HADOOP-7510.patch
>
>
> Tokens currently store the ip:port of the remote server.  This precludes 
> tokens from being used after a host's ip is changed.  Tokens should store the 
> hostname used to make the RPC connection.  This will enable new processes to 
> use their existing tokens.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7541) Add issuer field to delegation tokens

2011-09-09 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13101339#comment-13101339
 ] 

Daryn Sharp commented on HADOOP-7541:
-

Can we re-target and consider this for trunk?

> Add issuer field to delegation tokens
> -
>
> Key: HADOOP-7541
> URL: https://issues.apache.org/jira/browse/HADOOP-7541
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HADOOP-7541.patch
>
>
> Tokens currently lack traceability to their issuer.  This complicates the 
> ability to reliably renew tokens.  Tokens should have an optional issuer.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7599) Improve hadoop setup conf script to setup secure Hadoop cluster

2011-09-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13101340#comment-13101340
 ] 

Hadoop QA commented on HADOOP-7599:
---

-1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12493816/HADOOP-7599-trunk.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

-1 patch.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/151//console

This message is automatically generated.

> Improve hadoop setup conf script to setup secure Hadoop cluster
> ---
>
> Key: HADOOP-7599
> URL: https://issues.apache.org/jira/browse/HADOOP-7599
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 0.20.203.0
> Environment: Java 6, RHEL 5.6
>Reporter: Eric Yang
>Assignee: Eric Yang
> Fix For: 0.20.205.0
>
> Attachments: HADOOP-7599-1.patch, HADOOP-7599-trunk.patch, 
> HADOOP-7599.patch
>
>
> Setting up a secure Hadoop cluster requires a lot of manual setup.  The 
> motivation of this jira is to provide setup scripts that automate setting up 
> a secure Hadoop cluster.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7619) Incorrect setting JAVA_HOME variable under Cygwin on Windows

2011-09-09 Thread Eugene Kirpichov (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13101333#comment-13101333
 ] 

Eugene Kirpichov commented on HADOOP-7619:
--

The patch fixes two issues:
1) The path to java is not run through cygpath, so Cygwin cannot invoke it as a 
Windows path.
2) The path to java is not quoted, so the Cygwin shell splits it into multiple 
words when it contains spaces (which is quite often the case, e.g. when Java is 
in c:\program files\java\jdk1.6.0_17).

> Incorrect setting JAVA_HOME variable under Cygwin on Windows
> 
>
> Key: HADOOP-7619
> URL: https://issues.apache.org/jira/browse/HADOOP-7619
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.20.204.0
> Environment: Windows 7, Cygwin
>Reporter: Serg Melikyan
> Attachments: Hadoop.patch
>
>


--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7599) Improve hadoop setup conf script to setup secure Hadoop cluster

2011-09-09 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated HADOOP-7599:
--

Attachment: HADOOP-7599-trunk.patch

Same patch for trunk.

> Improve hadoop setup conf script to setup secure Hadoop cluster
> ---
>
> Key: HADOOP-7599
> URL: https://issues.apache.org/jira/browse/HADOOP-7599
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 0.20.203.0
> Environment: Java 6, RHEL 5.6
>Reporter: Eric Yang
>Assignee: Eric Yang
> Fix For: 0.20.205.0
>
> Attachments: HADOOP-7599-1.patch, HADOOP-7599-trunk.patch, 
> HADOOP-7599.patch
>
>
> Setting up a secure Hadoop cluster requires a lot of manual setup.  The 
> motivation of this jira is to provide setup scripts that automate setting up 
> a secure Hadoop cluster.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7619) Incorrect setting JAVA_HOME variable under Cygwin on Windows

2011-09-09 Thread Serg Melikyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Serg Melikyan updated HADOOP-7619:
--

Attachment: Hadoop.patch

Patch

> Incorrect setting JAVA_HOME variable under Cygwin on Windows
> 
>
> Key: HADOOP-7619
> URL: https://issues.apache.org/jira/browse/HADOOP-7619
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.20.204.0
> Environment: Windows 7, Cygwin
>Reporter: Serg Melikyan
> Attachments: Hadoop.patch
>
>


--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7541) Add issuer field to delegation tokens

2011-09-09 Thread Owen O'Malley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Owen O'Malley updated HADOOP-7541:
--

   Resolution: Won't Fix
Fix Version/s: (was: 0.20.205.0)
   Status: Resolved  (was: Patch Available)

> Add issuer field to delegation tokens
> -
>
> Key: HADOOP-7541
> URL: https://issues.apache.org/jira/browse/HADOOP-7541
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HADOOP-7541.patch
>
>
> Tokens currently lack traceability to their issuer.  This complicates the 
> ability to reliably renew tokens.  Tokens should have an optional issuer.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (HADOOP-6553) Delegation tokens get NPE when the renewer is not set

2011-09-09 Thread Owen O'Malley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Owen O'Malley resolved HADOOP-6553.
---

Resolution: Duplicate

HADOOP-6620 fixed this.

> Delegation tokens get NPE when the renewer is not set
> -
>
> Key: HADOOP-6553
> URL: https://issues.apache.org/jira/browse/HADOOP-6553
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Owen O'Malley
>
> If a delegation token does not have a renewer set, it will cause a NPE when 
> the TokenIdentifier is serialized.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7510) Tokens should use original hostname provided instead of ip

2011-09-09 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13101291#comment-13101291
 ] 

Daryn Sharp commented on HADOOP-7510:
-

This is now more academic than anything:
# I meant that when the task creates the new job conf for the remote job it 
will launch, it should set the {{mapreduce.job.hdfs-servers}} key so the remote 
job tracker will acquire the tokens.  I.e., they won't be passed along.  I 
think the user has to kinit on the other side irrespective of my change, so the 
remote JT should be able to get the tokens?  This shouldn't be a deal breaker, 
since you said it's a contrived use case.
# Not to beat a dead horse: the {{InetAddress}} docs are clear about if/when a 
reverse lookup occurs -- only when a single arg (host or ip) is given.  Give 
both and there's never a lookup.  Typically an {{InetSocketAddress}} is 
instantiated with just a host or an ip, so it instantiates an {{InetAddress}} 
with one arg.  Since {{InetSocketAddress}} delegates to {{InetAddress}}, 
lookups will occur.  However, when an {{InetSocketAddress}} is instantiated 
with an {{InetAddress}} that was itself constructed with both host and ip, no 
lookups will occur, due to the delegation.  I have checked the code of multiple 
versions and vendor flavors of Java and they all behave in this manner (see 
the demo sketch below).

I added the config param last night, so would you please review the changes to 
see if they are satisfactory?
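
A small standalone demo of point 2 (the host name and address bytes are 
placeholders):

{code}
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.UnknownHostException;

// Standalone demo of the lookup semantics described above; example.com and
// its address bytes are placeholders only.
public class LookupDemo {
  public static void main(String[] args) throws UnknownHostException {
    // One-arg style: only the hostname is given, so construction resolves it
    // (a forward DNS lookup can occur here).
    InetSocketAddress resolved = new InetSocketAddress("example.com", 8020);

    // Both host and address supplied: no lookup occurs, now or later.
    InetAddress addr = InetAddress.getByAddress(
        "example.com", new byte[] {93, (byte) 184, (byte) 216, 34});
    InetSocketAddress noLookup = new InetSocketAddress(addr, 8020);

    // getHostName() returns the supplied name without a reverse lookup.
    System.out.println(noLookup.getHostName());
  }
}
{code}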

> Tokens should use original hostname provided instead of ip
> --
>
> Key: HADOOP-7510
> URL: https://issues.apache.org/jira/browse/HADOOP-7510
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 0.20.205.0
>
> Attachments: HADOOP-7510-2.patch, HADOOP-7510-3.patch, 
> HADOOP-7510-4.patch, HADOOP-7510.patch
>
>
> Tokens currently store the ip:port of the remote server.  This precludes 
> tokens from being used after a host's ip is changed.  Tokens should store the 
> hostname used to make the RPC connection.  This will enable new processes to 
> use their existing tokens.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7619) Incorrect setting JAVA_HOME variable under Cygwin on Windows

2011-09-09 Thread Serg Melikyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Serg Melikyan updated HADOOP-7619:
--

Summary: Incorrect setting JAVA_HOME variable under Cygwin on Windows  
(was: Incorrect setting JAVA_HOME variable under Cygwin on windows)

> Incorrect setting JAVA_HOME variable under Cygwin on Windows
> 
>
> Key: HADOOP-7619
> URL: https://issues.apache.org/jira/browse/HADOOP-7619
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.20.204.0
> Environment: Windows 7, Cygwin
>Reporter: Serg Melikyan
>


--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HADOOP-7619) Incorrect setting JAVA_HOME variable under Cygwin on windows

2011-09-09 Thread Serg Melikyan (JIRA)
Incorrect setting JAVA_HOME variable under Cygwin on windows


 Key: HADOOP-7619
 URL: https://issues.apache.org/jira/browse/HADOOP-7619
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.20.204.0
 Environment: Windows 7, Cygwin
Reporter: Serg Melikyan




--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7598) smart-apply-patch.sh does not handle patching from a sub directory correctly.

2011-09-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13101255#comment-13101255
 ] 

Hudson commented on HADOOP-7598:


Integrated in Hadoop-Mapreduce-trunk #812 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/812/])
HADOOP-7598. Fix smart-apply-patch.sh to handle patching from a sub 
directory correctly. Contributed by Robert Evans.

acmurthy : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1166913
Files : 
* /hadoop/common/trunk/dev-support/smart-apply-patch.sh
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


> smart-apply-patch.sh does not handle patching from a sub directory correctly.
> -
>
> Key: HADOOP-7598
> URL: https://issues.apache.org/jira/browse/HADOOP-7598
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.23.0, 0.24.0
>Reporter: Robert Joseph Evans
>Assignee: Robert Joseph Evans
> Fix For: 0.23.0
>
> Attachments: HADOOP-7598-v1.patch, HADOOP-7598-v2.patch, 
> HADOOP-7598-v3.patch, HADOOP-7598-v4.patch
>
>
> In some situations, smart-apply-patch.sh does not apply valid patches from 
> trunk or from git as it was designed to do.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Assigned] (HADOOP-5814) NativeS3FileSystem doesn't report progress when writing

2011-09-09 Thread Devaraj K (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-5814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj K reassigned HADOOP-5814:
-

Assignee: Devaraj K

> NativeS3FileSystem doesn't report progress when writing
> ---
>
> Key: HADOOP-5814
> URL: https://issues.apache.org/jira/browse/HADOOP-5814
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Reporter: Tom White
>Assignee: Devaraj K
>
> This results in timeouts since the whole file is uploaded in the close 
> method. See 
> http://www.mail-archive.com/core-user@hadoop.apache.org/msg09881.html.
> One solution is to keep a reference to the Progressable passed in to the 
> NativeS3FsOutputStream's constructor, and progress it during writes and 
> while copying the backup file to S3 in the close method.
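
A minimal sketch of that approach: a hypothetical wrapper stream, not the 
actual NativeS3FsOutputStream change:

{code}
import java.io.IOException;
import java.io.OutputStream;
import org.apache.hadoop.util.Progressable;

// Hypothetical wrapper illustrating the suggestion above: keep the
// Progressable and tick it on every write so the task reports liveness
// instead of timing out during the long upload in close().
public class ProgressingOutputStream extends OutputStream {
  private final OutputStream out;
  private final Progressable progress;

  public ProgressingOutputStream(OutputStream out, Progressable progress) {
    this.out = out;
    this.progress = progress;
  }

  @Override
  public void write(int b) throws IOException {
    out.write(b);
    if (progress != null) {
      progress.progress();  // report liveness to the framework
    }
  }

  @Override
  public void close() throws IOException {
    out.close();
  }
}
{code}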

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7598) smart-apply-patch.sh does not handle patching from a sub directory correctly.

2011-09-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13101173#comment-13101173
 ] 

Hudson commented on HADOOP-7598:


Integrated in Hadoop-Hdfs-trunk #788 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/788/])
HADOOP-7598. Fix smart-apply-patch.sh to handle patching from a sub 
directory correctly. Contributed by Robert Evans.

acmurthy : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1166913
Files : 
* /hadoop/common/trunk/dev-support/smart-apply-patch.sh
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


> smart-apply-patch.sh does not handle patching from a sub directory correctly.
> -
>
> Key: HADOOP-7598
> URL: https://issues.apache.org/jira/browse/HADOOP-7598
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.23.0, 0.24.0
>Reporter: Robert Joseph Evans
>Assignee: Robert Joseph Evans
> Fix For: 0.23.0
>
> Attachments: HADOOP-7598-v1.patch, HADOOP-7598-v2.patch, 
> HADOOP-7598-v3.patch, HADOOP-7598-v4.patch
>
>
> In some situations, smart-apply-patch.sh does not apply valid patches from 
> trunk or from git as it was designed to do.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7612) Change test-patch to run tests for all nested modules

2011-09-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13101160#comment-13101160
 ] 

Hudson commented on HADOOP-7612:


Integrated in Hadoop-Hdfs-trunk #788 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/788/])
HADOOP-7612. Change test-patch to run tests for all nested modules.

tomwhite : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1166848
Files : 
* /hadoop/common/trunk/dev-support/test-patch.sh
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


> Change test-patch to run tests for all nested modules
> -
>
> Key: HADOOP-7612
> URL: https://issues.apache.org/jira/browse/HADOOP-7612
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Tom White
>Assignee: Tom White
> Fix For: 0.23.0
>
> Attachments: HADOOP-7612.patch
>
>
> HADOOP-7561 changed the behaviour of test-patch to run tests for changed 
> modules; however, this assumed a flat structure. Given the nested Maven 
> hierarchy, we should always run all the common tests for any common change, 
> all the HDFS tests for any HDFS change, and all the MapReduce tests for any 
> MapReduce change.
> In addition, we should do a top-level build to test compilation after any 
> change.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HADOOP-7618) start-dfs.sh and stop-dfs.sh are not working properly

2011-09-09 Thread Devaraj K (JIRA)
start-dfs.sh and stop-dfs.sh are not working properly
-

 Key: HADOOP-7618
 URL: https://issues.apache.org/jira/browse/HADOOP-7618
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 0.24.0
Reporter: Devaraj K
Assignee: Devaraj K
 Fix For: 0.24.0


When we execute start-dfs.sh, it gives the below error.

{code:xml}
linux124:/home/devaraj/NextGenMR/Hadoop-0.24-09082011/hadoop-hdfs-0.24.0-SNAPSHOT/sbin
 # ./start-dfs.sh
./start-dfs.sh: line 50: 
/home/devaraj/NextGenMR/Hadoop-0.24-09082011/hadoop-common-0.24.0-SNAPSHOT/libexec/../bin/hdfs:
 No such file or directory
Starting namenodes on []
./start-dfs.sh: line 55: 
/home/devaraj/NextGenMR/Hadoop-0.24-09082011/hadoop-common-0.24.0-SNAPSHOT/libexec/../bin/hadoop-daemons.sh:
 No such file or directory
./start-dfs.sh: line 68: 
/home/devaraj/NextGenMR/Hadoop-0.24-09082011/hadoop-common-0.24.0-SNAPSHOT/libexec/../bin/hadoop-daemons.sh:
 No such file or directory
Secondary namenodes are not configured.  Cannot start secondary namenodes.
{code}


It gives the below error when we execute stop-dfs.sh.

{code:xml}
linux124:/home/devaraj/NextGenMR/Hadoop-0.24-09082011/hadoop-hdfs-0.24.0-SNAPSHOT/sbin
 # ./stop-dfs.sh
./stop-dfs.sh: line 26: 
/home/devaraj/NextGenMR/Hadoop-0.24-09082011/hadoop-common-0.24.0-SNAPSHOT/libexec/../bin/hdfs:
 No such file or directory
Stopping namenodes on []
./stop-dfs.sh: line 31: 
/home/devaraj/NextGenMR/Hadoop-0.24-09082011/hadoop-common-0.24.0-SNAPSHOT/libexec/../bin/hadoop-daemons.sh:
 No such file or directory
./stop-dfs.sh: line 44: 
/home/devaraj/NextGenMR/Hadoop-0.24-09082011/hadoop-common-0.24.0-SNAPSHOT/libexec/../bin/hadoop-daemons.sh:
 No such file or directory
Secondary namenodes are not configured.  Cannot stop secondary namenodes.
{code}

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira