[jira] [Commented] (HADOOP-9631) ViewFs should use underlying FileSystem's server side defaults

2013-06-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13680988#comment-13680988
 ] 

Hadoop QA commented on HADOOP-9631:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12587367/HADOOP-9631.trunk.2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2639//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2639//console

This message is automatically generated.

> ViewFs should use underlying FileSystem's server side defaults
> --
>
> Key: HADOOP-9631
> URL: https://issues.apache.org/jira/browse/HADOOP-9631
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, viewfs
>Affects Versions: 2.0.4-alpha
>Reporter: Lohit Vijayarenu
> Attachments: HADOOP-9631.trunk.1.patch, HADOOP-9631.trunk.2.patch, 
> TestFileContext.java
>
>
> On a cluster with ViewFS as default FileSystem, creating files using 
> FileContext will always result with replication factor of 1, instead of 
> underlying filesystem default (like HDFS)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HADOOP-9526) TestShellCommandFencer and TestShell fail on Windows

2013-06-11 Thread Chuan Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chuan Liu resolved HADOOP-9526.
---

Resolution: Fixed

Resolving this. The open problem should be tracked by HADOOP-9632.

> TestShellCommandFencer and TestShell fail on Windows
> 
>
> Key: HADOOP-9526
> URL: https://issues.apache.org/jira/browse/HADOOP-9526
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0, 2.1.0-beta
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 3.0.0, 2.1.0-beta
>
> Attachments: HADOOP-9526.001.patch, HADOOP-9526.002.patch, 
> HADOOP-9526-host-fix.patch
>
>
> The following TestShellCommandFencer tests fail on Windows.
> # testTargetAsEnvironment
> # testConfAsEnvironment
> # testTargetAsEnvironment
> TestShell#testInterval also fails.
> All failures look like test issues.



[jira] [Commented] (HADOOP-9639) truly shared cache for jars (jobjar/libjar)

2013-06-11 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13680935#comment-13680935
 ] 

Chris Nauroth commented on HADOOP-9639:
---

Thanks, Sangjin.  That makes sense.  It sounds then like this is a feature 
request for MRv2 to leverage the full range of local resource visibility 
settings offered by YARN, and expose that to end users submitting the MR jobs.

> truly shared cache for jars (jobjar/libjar)
> ---
>
> Key: HADOOP-9639
> URL: https://issues.apache.org/jira/browse/HADOOP-9639
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: filecache
>Affects Versions: 2.0.4-alpha
>Reporter: Sangjin Lee
>
> Currently there is the distributed cache that enables you to cache jars and 
> files so that attempts from the same job can reuse them. However, sharing is 
> limited with the distributed cache because it is normally on a per-job basis. 
> On a large cluster, sometimes copying of jobjars and libjars becomes so 
> prevalent that it consumes a large portion of the network bandwidth, not to 
> speak of defeating the purpose of "bringing compute to where data is". This 
> is wasteful because in most cases code doesn't change much across many jobs.
> I'd like to propose and discuss feasibility of introducing a truly shared 
> cache so that multiple jobs from multiple users can share and cache jars. 
> This JIRA is to open the discussion.



[jira] [Updated] (HADOOP-9631) ViewFs should use underlying FileSystem's server side defaults

2013-06-11 Thread Lohit Vijayarenu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lohit Vijayarenu updated HADOOP-9631:
-

Status: Patch Available  (was: Open)

> ViewFs should use underlying FileSystem's server side defaults
> --
>
> Key: HADOOP-9631
> URL: https://issues.apache.org/jira/browse/HADOOP-9631
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, viewfs
>Affects Versions: 2.0.4-alpha
>Reporter: Lohit Vijayarenu
> Attachments: HADOOP-9631.trunk.1.patch, HADOOP-9631.trunk.2.patch, 
> TestFileContext.java
>
>
> On a cluster with ViewFS as default FileSystem, creating files using 
> FileContext will always result with replication factor of 1, instead of 
> underlying filesystem default (like HDFS)



[jira] [Updated] (HADOOP-9631) ViewFs should use underlying FileSystem's server side defaults

2013-06-11 Thread Lohit Vijayarenu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lohit Vijayarenu updated HADOOP-9631:
-

Attachment: HADOOP-9631.trunk.2.patch

While trying to see why the test cases failed, I realized that there is an easier 
way to do this. In the end, each filesystem has ServerDefaults, so in viewfs we 
just had to pick the right filesystem and pass the call down. I made a change to 
use the home directory path to choose the underlying filesystem. Attaching a new 
patch with this change. Now all viewfs tests, as well as the new test, pass.
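A minimal sketch of that idea (illustrative only, not the actual Hadoop ViewFs code; class and method names here are invented): ViewFs keeps a mount table mapping path prefixes to target file systems, so instead of fabricating defaults it can resolve a path such as the home directory and delegate getServerDefaults() to whatever file system backs it:

```java
import java.util.Map;
import java.util.TreeMap;

// Illustrative sketch only, not the actual Hadoop ViewFs code. It models the
// idea in the patch: instead of returning hard-coded defaults, resolve a path
// (here the user's home directory) through the mount table and delegate
// getServerDefaults() to the target file system.
public class ViewFsDefaultsSketch {

    /** Minimal stand-in for FsServerDefaults: only the replication factor. */
    public static final class ServerDefaults {
        public final short replication;
        public ServerDefaults(short replication) { this.replication = replication; }
    }

    /** Minimal stand-in for a mounted FileSystem. */
    public interface MountedFs {
        ServerDefaults getServerDefaults();
    }

    // Mount table: path prefix of the viewfs namespace -> target file system.
    private final TreeMap<String, MountedFs> mounts = new TreeMap<>();
    private final String homeDir;

    public ViewFsDefaultsSketch(String homeDir) { this.homeDir = homeDir; }

    public void addMount(String prefix, MountedFs fs) { mounts.put(prefix, fs); }

    /** Resolve a path by longest matching mount prefix (descending order). */
    public MountedFs resolve(String path) {
        for (Map.Entry<String, MountedFs> e : mounts.descendingMap().entrySet()) {
            if (path.startsWith(e.getKey())) {
                return e.getValue();
            }
        }
        throw new IllegalArgumentException("no mount for " + path);
    }

    /** The fix in miniature: delegate instead of inventing defaults. */
    public ServerDefaults getServerDefaults() {
        return resolve(homeDir).getServerDefaults();
    }

    public static void main(String[] args) {
        ViewFsDefaultsSketch viewFs = new ViewFsDefaultsSketch("/user/alice");
        // An HDFS-like mount whose server-side default replication is 3.
        viewFs.addMount("/user", () -> new ServerDefaults((short) 3));
        System.out.println(viewFs.getServerDefaults().replication); // prints 3
    }
}
```

Without the delegation, a sketch like this would have to hard-code a replication factor, which is exactly the reported bug.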

> ViewFs should use underlying FileSystem's server side defaults
> --
>
> Key: HADOOP-9631
> URL: https://issues.apache.org/jira/browse/HADOOP-9631
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, viewfs
>Affects Versions: 2.0.4-alpha
>Reporter: Lohit Vijayarenu
> Attachments: HADOOP-9631.trunk.1.patch, HADOOP-9631.trunk.2.patch, 
> TestFileContext.java
>
>
> On a cluster with ViewFS as default FileSystem, creating files using 
> FileContext will always result with replication factor of 1, instead of 
> underlying filesystem default (like HDFS)



[jira] [Updated] (HADOOP-9631) ViewFs should use underlying FileSystem's server side defaults

2013-06-11 Thread Lohit Vijayarenu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lohit Vijayarenu updated HADOOP-9631:
-

Status: Open  (was: Patch Available)

> ViewFs should use underlying FileSystem's server side defaults
> --
>
> Key: HADOOP-9631
> URL: https://issues.apache.org/jira/browse/HADOOP-9631
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, viewfs
>Affects Versions: 2.0.4-alpha
>Reporter: Lohit Vijayarenu
> Attachments: HADOOP-9631.trunk.1.patch, TestFileContext.java
>
>
> On a cluster with ViewFS as default FileSystem, creating files using 
> FileContext will always result with replication factor of 1, instead of 
> underlying filesystem default (like HDFS)



[jira] [Commented] (HADOOP-9638) parallel test changes caused invalid test path for several HDFS tests on Windows

2013-06-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13680884#comment-13680884
 ] 

Hadoop QA commented on HADOOP-9638:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12587345/HADOOP-9638.3.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 11 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2638//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2638//console

This message is automatically generated.

> parallel test changes caused invalid test path for several HDFS tests on 
> Windows
> 
>
> Key: HADOOP-9638
> URL: https://issues.apache.org/jira/browse/HADOOP-9638
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0, 2.1.0-beta
>Reporter: Chris Nauroth
>Assignee: Andrey Klochkov
> Attachments: HADOOP-9638.2.patch, HADOOP-9638.3.patch, 
> HADOOP-9638.patch
>
>
> HADOOP-9287 made changes to the tests to support running multiple tests in 
> parallel.  Part of that patch accidentally reverted a prior change to use 
> paths of the form "/tmp/" when running tests against HDFS.  On 
> Windows, use of the test root will contain a drive spec (i.e. C:\dir), and 
> the colon character is rejected as invalid by HDFS.
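The failure mode described above can be shown in a few lines (a hypothetical illustration, not the actual HDFS validation code): HDFS rejects ':' inside a path component, so a test root derived from a Windows working directory such as C:\dir can never be a legal HDFS path:

```java
// Hypothetical sketch of the failure described above, not the actual HDFS
// validation code: HDFS rejects ':' (and '/') inside a path component, so a
// test root containing a Windows drive spec like "C:" is invalid.
public class PathColonSketch {
    /** Mirrors the HDFS rule that a path element may not contain ':' or '/'. */
    public static boolean isValidHdfsPathComponent(String component) {
        return !component.contains(":") && !component.contains("/");
    }

    public static void main(String[] args) {
        System.out.println(isValidHdfsPathComponent("tmp")); // prints true
        System.out.println(isValidHdfsPathComponent("C:"));  // prints false
    }
}
```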



[jira] [Commented] (HADOOP-9517) Document Hadoop Compatibility

2013-06-11 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13680883#comment-13680883
 ] 

Karthik Kambatla commented on HADOOP-9517:
--

If no one has any comments against the newly proposed policies, I'll upload a 
new patch on Thursday with the (Proposal) tags removed.

> Document Hadoop Compatibility
> -
>
> Key: HADOOP-9517
> URL: https://issues.apache.org/jira/browse/HADOOP-9517
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Arun C Murthy
>Assignee: Karthik Kambatla
>Priority: Blocker
> Attachments: hadoop-9517.patch, hadoop-9517.patch, hadoop-9517.patch, 
> hadoop-9517.patch, hadoop-9517-proposal-v1.patch, 
> hadoop-9517-proposal-v1.patch, hadoop-9517-v2.patch, hadoop-9517-v3.patch
>
>
> As we get ready to call hadoop-2 stable we need to better define 'Hadoop 
> Compatibility'.
> http://wiki.apache.org/hadoop/Compatibility is a start, let's document 
> requirements clearly and completely.



[jira] [Commented] (HADOOP-9517) Document Hadoop Compatibility

2013-06-11 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13680882#comment-13680882
 ] 

Karthik Kambatla commented on HADOOP-9517:
--

I have marked this a blocker for 2.1.0-beta, per conversations on the dev list. 
I think the next steps are:
# verify the doc captures all the items that affect compatibility
# verify the policies for the previously existing items are accurate
# verify the newly proposed policies are reasonable
# improve the presentation, if need be

Will gladly incorporate any feedback.

> Document Hadoop Compatibility
> -
>
> Key: HADOOP-9517
> URL: https://issues.apache.org/jira/browse/HADOOP-9517
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Arun C Murthy
>Assignee: Karthik Kambatla
>Priority: Blocker
> Attachments: hadoop-9517.patch, hadoop-9517.patch, hadoop-9517.patch, 
> hadoop-9517.patch, hadoop-9517-proposal-v1.patch, 
> hadoop-9517-proposal-v1.patch, hadoop-9517-v2.patch, hadoop-9517-v3.patch
>
>
> As we get ready to call hadoop-2 stable we need to better define 'Hadoop 
> Compatibility'.
> http://wiki.apache.org/hadoop/Compatibility is a start, let's document 
> requirements clearly and completely.



[jira] [Updated] (HADOOP-9640) RPC Congestion Control

2013-06-11 Thread Xiaobo Peng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobo Peng updated HADOOP-9640:


Description: 
Several production Hadoop cluster incidents occurred where the Namenode was 
overloaded and failed to be responsive.  This task is to improve the system to 
detect RPC congestion early, and to provide good diagnostic information for 
alerts that identify suspicious jobs/users so as to restore services quickly.

Excerpted from the communication of one incident, “The map task of a user was 
creating huge number of small files in the user directory. Due to the heavy 
load on NN, the JT also was unable to communicate with NN...The cluster became 
responsive only once the job was killed.”

Excerpted from the communication of another incident, “Namenode was overloaded 
by GetBlockLocation requests (Correction: should be getFileInfo requests. the 
job had a bug that called getFileInfo for a nonexistent file in an endless 
loop). All other requests to namenode were also affected by this and hence all 
jobs slowed down. Cluster almost came to a grinding halt…Eventually killed 
jobtracker to kill all jobs that are running.”

Excerpted from HDFS-945, “We've seen defective applications cause havoc on the 
NameNode, for e.g. by doing 100k+ 'listStatus' on very large directories (60k 
files) etc.”


  was:
Several production Hadoop cluster incidents occurred where the Namenode was 
overloaded and failed to be responsive.  This task is to improve the system to 
detect RPC congestion early, and to provide good diagnostic information for 
alerts that identify suspicious jobs/users so as to restore services quickly.

Excerpted from the communication of one incident, “The map task of a user was 
creating huge number of small files in the user directory. Due to the heavy 
load on NN, the JT also was unable to communicate with NN...The cluster became 
responsive only once the job was killed.”

Excerpted from the communication of another incident, “Namenode was overloaded 
by GetBlockLocation requests (Correction: should be getFileInfo requests. the 
job had a bug that called getFileInfo in an endless loop). All other requests 
to namenode were also affected by this and hence all jobs slowed down. Cluster 
almost came to a grinding halt…Eventually killed jobtracker to kill all jobs 
that are running.”

Excerpted from HDFS-945, “We've seen defective applications cause havoc on the 
NameNode, for e.g. by doing 100k+ 'listStatus' on very large directories (60k 
files) etc.”



> RPC Congestion Control
> --
>
> Key: HADOOP-9640
> URL: https://issues.apache.org/jira/browse/HADOOP-9640
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiaobo Peng
>
> Several production Hadoop cluster incidents occurred where the Namenode was 
> overloaded and failed to be responsive.  This task is to improve the system 
> to detect RPC congestion early, and to provide good diagnostic information 
> for alerts that identify suspicious jobs/users so as to restore services 
> quickly.
> Excerpted from the communication of one incident, “The map task of a user was 
> creating huge number of small files in the user directory. Due to the heavy 
> load on NN, the JT also was unable to communicate with NN...The cluster 
> became responsive only once the job was killed.”
> Excerpted from the communication of another incident, “Namenode was 
> overloaded by GetBlockLocation requests (Correction: should be getFileInfo 
> requests. the job had a bug that called getFileInfo for a nonexistent file in 
> an endless loop). All other requests to namenode were also affected by this 
> and hence all jobs slowed down. Cluster almost came to a grinding 
> halt…Eventually killed jobtracker to kill all jobs that are running.”
> Excerpted from HDFS-945, “We've seen defective applications cause havoc on 
> the NameNode, for e.g. by doing 100k+ 'listStatus' on very large directories 
> (60k files) etc.”
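One shape the proposed early detection might take, sketched as a toy per-user counter (purely hypothetical, not part of any Hadoop patch; all names are invented): flag a user whose share of recent RPC traffic is abnormally high, as in the endless getFileInfo loop above:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the kind of early-warning signal proposed in this
// JIRA, not Hadoop code: count RPC calls per user and flag users whose share
// of total traffic is suspiciously high (e.g. an endless getFileInfo loop).
public class RpcCongestionMonitorSketch {
    private final Map<String, Integer> callsByUser = new HashMap<>();
    private int totalCalls = 0;

    /** Record one RPC call attributed to the given user. */
    public void record(String user) {
        callsByUser.merge(user, 1, Integer::sum);
        totalCalls++;
    }

    /** Flags any user responsible for more than the given share of all calls. */
    public boolean isSuspicious(String user, double maxShare) {
        if (totalCalls == 0) return false;
        return callsByUser.getOrDefault(user, 0) > maxShare * totalCalls;
    }

    public static void main(String[] args) {
        RpcCongestionMonitorSketch monitor = new RpcCongestionMonitorSketch();
        for (int i = 0; i < 90; i++) monitor.record("looping-job"); // runaway client
        for (int i = 0; i < 10; i++) monitor.record("normal-job");
        System.out.println(monitor.isSuspicious("looping-job", 0.5)); // prints true
        System.out.println(monitor.isSuspicious("normal-job", 0.5));  // prints false
    }
}
```

A real implementation would use a sliding window and feed the flagged users into the diagnostic alerts the description calls for; this only illustrates the signal.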



[jira] [Updated] (HADOOP-9640) RPC Congestion Control

2013-06-11 Thread Xiaobo Peng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobo Peng updated HADOOP-9640:


Description: 
Several production Hadoop cluster incidents occurred where the Namenode was 
overloaded and failed to be responsive.  This task is to improve the system to 
detect RPC congestion early, and to provide good diagnostic information for 
alerts that identify suspicious jobs/users so as to restore services quickly.

Excerpted from the communication of one incident, “The map task of a user was 
creating huge number of small files in the user directory. Due to the heavy 
load on NN, the JT also was unable to communicate with NN...The cluster became 
responsive only once the job was killed.”

Excerpted from the communication of another incident, “Namenode was overloaded 
by GetBlockLocation requests (Correction: should be getFileInfo requests. the 
job had a bug that called getFileInfo in an endless loop). All other requests 
to namenode were also affected by this and hence all jobs slowed down. Cluster 
almost came to a grinding halt…Eventually killed jobtracker to kill all jobs 
that are running.”

Excerpted from HDFS-945, “We've seen defective applications cause havoc on the 
NameNode, for e.g. by doing 100k+ 'listStatus' on very large directories (60k 
files) etc.”


  was:
Several production Hadoop cluster incidents occurred where the Namenode was 
overloaded and failed to be responsive.  This task is to improve the system to 
detect RPC congestion early, and to provide good diagnostic information for 
alerts that identify suspicious jobs/users so as to restore services quickly.

Excerpted from the communication of one incident, “The map task of a user was 
creating huge number of small files in the user directory. Due to the heavy 
load on NN, the JT also was unable to communicate with NN...The cluster became 
responsive only once the job was killed.”

Excerpted from the communication of another incident, “Namenode was overloaded 
by GetBlockLocation requests. All other requests to namenode were also affected 
by this and hence all jobs slowed down. Cluster almost came to a grinding 
halt…Eventually killed jobtracker to kill all jobs that are running.”

Excerpted from HDFS-945, “We've seen defective applications cause havoc on the 
NameNode, for e.g. by doing 100k+ 'listStatus' on very large directories (60k 
files) etc.”



> RPC Congestion Control
> --
>
> Key: HADOOP-9640
> URL: https://issues.apache.org/jira/browse/HADOOP-9640
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiaobo Peng
>
> Several production Hadoop cluster incidents occurred where the Namenode was 
> overloaded and failed to be responsive.  This task is to improve the system 
> to detect RPC congestion early, and to provide good diagnostic information 
> for alerts that identify suspicious jobs/users so as to restore services 
> quickly.
> Excerpted from the communication of one incident, “The map task of a user was 
> creating huge number of small files in the user directory. Due to the heavy 
> load on NN, the JT also was unable to communicate with NN...The cluster 
> became responsive only once the job was killed.”
> Excerpted from the communication of another incident, “Namenode was 
> overloaded by GetBlockLocation requests (Correction: should be getFileInfo 
> requests. the job had a bug that called getFileInfo in an endless loop). All 
> other requests to namenode were also affected by this and hence all jobs 
> slowed down. Cluster almost came to a grinding halt…Eventually killed 
> jobtracker to kill all jobs that are running.”
> Excerpted from HDFS-945, “We've seen defective applications cause havoc on 
> the NameNode, for e.g. by doing 100k+ 'listStatus' on very large directories 
> (60k files) etc.”



[jira] [Updated] (HADOOP-9517) Document Hadoop Compatibility

2013-06-11 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-9517:
-

Priority: Blocker  (was: Major)

> Document Hadoop Compatibility
> -
>
> Key: HADOOP-9517
> URL: https://issues.apache.org/jira/browse/HADOOP-9517
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Arun C Murthy
>Assignee: Karthik Kambatla
>Priority: Blocker
> Attachments: hadoop-9517.patch, hadoop-9517.patch, hadoop-9517.patch, 
> hadoop-9517.patch, hadoop-9517-proposal-v1.patch, 
> hadoop-9517-proposal-v1.patch, hadoop-9517-v2.patch, hadoop-9517-v3.patch
>
>
> As we get ready to call hadoop-2 stable we need to better define 'Hadoop 
> Compatibility'.
> http://wiki.apache.org/hadoop/Compatibility is a start, let's document 
> requirements clearly and completely.



[jira] [Commented] (HADOOP-9638) parallel test changes caused invalid test path for several HDFS tests on Windows

2013-06-11 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13680862#comment-13680862
 ] 

Chris Nauroth commented on HADOOP-9638:
---

+1 for the patch, pending successful Jenkins run for the latest version.  The 
changes look good.  I verified on both Mac and Windows.  Thanks for addressing 
this, Andrey!  I'll commit this after Jenkins responds with +1 for the latest 
version.

> parallel test changes caused invalid test path for several HDFS tests on 
> Windows
> 
>
> Key: HADOOP-9638
> URL: https://issues.apache.org/jira/browse/HADOOP-9638
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0, 2.1.0-beta
>Reporter: Chris Nauroth
>Assignee: Andrey Klochkov
> Attachments: HADOOP-9638.2.patch, HADOOP-9638.3.patch, 
> HADOOP-9638.patch
>
>
> HADOOP-9287 made changes to the tests to support running multiple tests in 
> parallel.  Part of that patch accidentally reverted a prior change to use 
> paths of the form "/tmp/" when running tests against HDFS.  On 
> Windows, use of the test root will contain a drive spec (i.e. C:\dir), and 
> the colon character is rejected as invalid by HDFS.



[jira] [Commented] (HADOOP-9638) parallel test changes caused invalid test path for several HDFS tests on Windows

2013-06-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13680851#comment-13680851
 ] 

Hadoop QA commented on HADOOP-9638:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12587336/HADOOP-9638.2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 11 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2637//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2637//console

This message is automatically generated.

> parallel test changes caused invalid test path for several HDFS tests on 
> Windows
> 
>
> Key: HADOOP-9638
> URL: https://issues.apache.org/jira/browse/HADOOP-9638
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0, 2.1.0-beta
>Reporter: Chris Nauroth
>Assignee: Andrey Klochkov
> Attachments: HADOOP-9638.2.patch, HADOOP-9638.3.patch, 
> HADOOP-9638.patch
>
>
> HADOOP-9287 made changes to the tests to support running multiple tests in 
> parallel.  Part of that patch accidentally reverted a prior change to use 
> paths of the form "/tmp/" when running tests against HDFS.  On 
> Windows, use of the test root will contain a drive spec (i.e. C:\dir), and 
> the colon character is rejected as invalid by HDFS.



[jira] [Commented] (HADOOP-9639) truly shared cache for jars (jobjar/libjar)

2013-06-11 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13680847#comment-13680847
 ] 

Sangjin Lee commented on HADOOP-9639:
-

Thanks for your comment, Chris. Yes, it looks like YARN can definitely support 
what I have in mind. But it seems it would be more of a building block toward 
what I want than the complete story (correct me if I'm wrong).

I am coming specifically from a map-reduce perspective. I'd like to be able to 
address concerns like submitting a mapreduce job but having the job jar and the 
libjars safely cached in a shared manner, the location of these jars 
referenced as part of the job classpath, adding/expiring cached jars both in 
HDFS and locally on the nodes, etc.

It seems like the distributed cache already does most of that. And if I'm 
not mistaken, it leverages the resource localization you speak of on the 
cluster side as the building block. It's only that it limits sharing to a 
per-job basis.

So to support this for all map-reduce jobs, it looks to me that changes to 
things like JobSubmitter and YARNRunner are needed. Please let me know if I'm 
way off. Thanks!
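The proposal could be sketched as a cache keyed by the jar's content checksum rather than by job, so a second job, even from a different user, reuses the first upload (an illustrative toy, not an existing Hadoop API; all names and the cache layout are invented):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative toy for the "truly shared cache" idea, not an existing Hadoop
// API: entries are keyed by a checksum of the jar's contents, so identical
// jars submitted by different jobs (or users) map to one cached copy.
public class SharedJarCacheSketch {
    // checksum of jar contents -> location of the cached copy (e.g. an HDFS path)
    private final Map<String, String> cache = new HashMap<>();
    private int uploads = 0;

    /** Returns the shared cached location, uploading only on a cache miss. */
    public String localize(String checksum, String sourcePath) {
        return cache.computeIfAbsent(checksum, c -> {
            // A real implementation would copy sourcePath into the shared
            // location here; we just count the upload.
            uploads++;
            return "/shared/cache/" + c + ".jar"; // hypothetical layout
        });
    }

    public int uploadCount() { return uploads; }

    public static void main(String[] args) {
        SharedJarCacheSketch cache = new SharedJarCacheSketch();
        // Two jobs, two users, same jar bits: one upload, one shared entry.
        String a = cache.localize("abc123", "/user/alice/job.jar");
        String b = cache.localize("abc123", "/user/bob/job.jar");
        System.out.println(a.equals(b) && cache.uploadCount() == 1); // prints true
    }
}
```

Expiry, permissions, and classpath wiring (the JobSubmitter/YARNRunner changes mentioned above) are exactly the parts this toy leaves out.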


> truly shared cache for jars (jobjar/libjar)
> ---
>
> Key: HADOOP-9639
> URL: https://issues.apache.org/jira/browse/HADOOP-9639
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: filecache
>Affects Versions: 2.0.4-alpha
>Reporter: Sangjin Lee
>
> Currently there is the distributed cache that enables you to cache jars and 
> files so that attempts from the same job can reuse them. However, sharing is 
> limited with the distributed cache because it is normally on a per-job basis. 
> On a large cluster, sometimes copying of jobjars and libjars becomes so 
> prevalent that it consumes a large portion of the network bandwidth, not to 
> speak of defeating the purpose of "bringing compute to where data is". This 
> is wasteful because in most cases code doesn't change much across many jobs.
> I'd like to propose and discuss feasibility of introducing a truly shared 
> cache so that multiple jobs from multiple users can share and cache jars. 
> This JIRA is to open the discussion.



[jira] [Commented] (HADOOP-9421) Convert SASL to use ProtoBuf and add lengths for non-blocking processing

2013-06-11 Thread Luke Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13680842#comment-13680842
 ] 

Luke Lu commented on HADOOP-9421:
-

bq. The abstract name appears in the stringified UGI, so it's not applicable to 
call it CHALLENGE_RESPONSE.

It's still really a mechanism name (vs. KERBEROS, PLAIN, etc.), and I don't 
think ID_TOKEN or SSO_TOKEN would be appropriate in SaslRpcServer.AuthMethod. 
But I'm not insisting.

bq.  If we upgrade the digest mechanism for tokens, that new mechanism may also 
have an initial response

Challenge-response is the main performance use case that's worth optimizing 
for. Digest-MD5 doesn't have an initial response. The main (only?) potential 
replacement, SCRAM, doesn't use the server name at all (i.e., no digest-uri 
issue in HA cases). The code can be a generic SASL exchange; we just take 
advantage of the fact that token-based auth is automatically optimized if you 
send the client initiation with the connection header. For non-token 
mechanisms, the server can simply return a negotiate response with the server 
name if the server name from the initial response doesn't match.

bq. Initial response is true for other mechanisms like GSSAPI and PLAIN, which 
means kerberos has just been penalized

No. In the normal situation (where the client's assumed server name matches 
the server's), I save a negotiation round trip. In the failover situation, the 
server can simply return a negotiation with the server name, so the client can 
reinitialize the SASL client with the correct server name and send the correct 
"initial" response, which is the same number of round trips as your normal 
case.

bq. Backing up, I thought we agreed earlier to defer reconnect optimizations to 
a future jira?

I can definitely compromise for clear trade-offs. But I'd like to make sure we 
both fully understand the implications/alternatives before moving on.

> Convert SASL to use ProtoBuf and add lengths for non-blocking processing
> 
>
> Key: HADOOP-9421
> URL: https://issues.apache.org/jira/browse/HADOOP-9421
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 2.0.3-alpha
>Reporter: Sanjay Radia
>Assignee: Daryn Sharp
> Attachments: HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421.patch, 
> HADOOP-9421.patch, HADOOP-9421-v2-demo.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9638) parallel test changes caused invalid test path for several HDFS tests on Windows

2013-06-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13680820#comment-13680820
 ] 

Hadoop QA commented on HADOOP-9638:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12587315/HADOOP-9638.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 10 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2635//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2635//console

This message is automatically generated.

> parallel test changes caused invalid test path for several HDFS tests on 
> Windows
> 
>
> Key: HADOOP-9638
> URL: https://issues.apache.org/jira/browse/HADOOP-9638
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0, 2.1.0-beta
>Reporter: Chris Nauroth
>Assignee: Andrey Klochkov
> Attachments: HADOOP-9638.2.patch, HADOOP-9638.3.patch, 
> HADOOP-9638.patch
>
>
> HADOOP-9287 made changes to the tests to support running multiple tests in 
> parallel.  Part of that patch accidentally reverted a prior change to use 
> paths of the form "/tmp/" when running tests against HDFS.  On 
> Windows, the test root will contain a drive spec (e.g. C:\dir), and 
> the colon character is rejected as invalid by HDFS.
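
The failure mode described above can be illustrated with a small sketch (an illustrative check, not HDFS's actual validation code): HDFS forbids ':' inside path components, and a Windows drive spec like C:\dir leaks a colon into the generated test path.

```python
# Illustrative check (not HDFS's actual validation code) of why a Windows
# test root breaks HDFS tests: HDFS rejects ':' inside path components,
# and a drive spec such as C:\dir introduces exactly that character.
def has_colon_component(path: str) -> bool:
    # True if any slash-separated component of the path contains a colon.
    return any(":" in part for part in path.strip("/").split("/"))

print(has_colon_component("/tmp/testdir"))     # colon-free: fine for HDFS
print(has_colon_component("/C:/dir/testdir"))  # drive spec leaked in: rejected
```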

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9638) parallel test changes caused invalid test path for several HDFS tests on Windows

2013-06-11 Thread Andrey Klochkov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Klochkov updated HADOOP-9638:


Attachment: HADOOP-9638.3.patch

Thanks, Chris. Updating the patch.

> parallel test changes caused invalid test path for several HDFS tests on 
> Windows
> 
>
> Key: HADOOP-9638
> URL: https://issues.apache.org/jira/browse/HADOOP-9638
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0, 2.1.0-beta
>Reporter: Chris Nauroth
>Assignee: Andrey Klochkov
> Attachments: HADOOP-9638.2.patch, HADOOP-9638.3.patch, 
> HADOOP-9638.patch
>
>
> HADOOP-9287 made changes to the tests to support running multiple tests in 
> parallel.  Part of that patch accidentally reverted a prior change to use 
> paths of the form "/tmp/" when running tests against HDFS.  On 
> Windows, the test root will contain a drive spec (e.g. C:\dir), and 
> the colon character is rejected as invalid by HDFS.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HADOOP-9389) test-patch marks -1 due to a context @Test by mistake

2013-06-11 Thread Zhijie Shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhijie Shen resolved HADOOP-9389.
-

Resolution: Invalid

It seems that nontimeoutTests is no longer checked.

> test-patch marks -1 due to a context @Test by mistake
> -
>
> Key: HADOOP-9389
> URL: https://issues.apache.org/jira/browse/HADOOP-9389
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Zhijie Shen
>Assignee: Zhijie Shen
> Attachments: HADOOP-9389_1.patch
>
>
> HADOOP-9112 enables marking -1 when newly added tests 
> don't have a timeout. However, test-patch will mark -1 for a context @Test 
> by mistake. Below is the problematic part of the YARN-378_3.patch that I've 
> created.
> {code}
> +}
> +  }
> +
>@Test
>public void testRMAppSubmitWithQueueAndName() throws Exception {
>  long now = System.currentTimeMillis();
> {code}
> There's a @Test without timeout (most existing tests don't have timeout) in 
> the context. In test-patch, $AWK '\{ printf "%s ", $0 \}' collapses these 
> lines into one line, i.e.,
> {code}
> +} +  } +@Testpublic void testRMAppSubmitWithQueueAndName() 
> throws Exception {  long now = System.currentTimeMillis();
> {code}
> Then, @Test in the context follows a "+", and is regarded as a newly added 
> test by mistake. Consequently, the following regex will accept the context 
> @Test. 
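
The collapsing behavior described above is easy to reproduce. The sketch below is a simplified reconstruction of the assumed heuristic, not the actual test-patch script: joining all patch lines with spaces makes a context @Test appear right after the "+" of the previous added line.

```python
import re

# Simplified reconstruction (assumed logic, not the actual test-patch
# script): collapse all patch lines into one string, then look for an
# added @Test that lacks a timeout attribute.
patch_lines = [
    "+}",
    "+  }",
    "+",
    "   @Test",  # context line -- NOT added by the patch
    "   public void testRMAppSubmitWithQueueAndName() throws Exception {",
]

# Equivalent of $AWK '{ printf "%s ", $0 }': every line joined by spaces.
collapsed = " ".join(patch_lines)

# After collapsing, the context "@Test" immediately follows the "+" of the
# previous added (blank) line, so a pattern meant to flag "added @Test
# without a timeout" fires on an annotation the patch never touched.
match = re.search(r"\+\s*@Test(?!\s*\(\s*timeout)", collapsed)
print(match is not None)  # True: a false positive
```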

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9421) Convert SASL to use ProtoBuf and add lengths for non-blocking processing

2013-06-11 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13680770#comment-13680770
 ] 

Daryn Sharp commented on HADOOP-9421:
-

bq.  As long as the mech is TOKEN (or better CHALLENGE_RESPONSE),
The authMethods map an abstract name (i.e. TOKEN) to the SASL mechanism. 
With the other security designs being discussed, we might have an ID_TOKEN or 
SSO_TOKEN.  The abstract name appears in the stringified UGI, so it's not 
appropriate to call it CHALLENGE_RESPONSE.

bq. why can't you instantiate the sasl client then (after receiving server 
challenge) with the info from server challenge? Why would an additional 
roundtrip be necessary unless the mech is not supported by server?

That's precisely the convoluted logic I outlined in steps 1-4.  It adds a lot 
of complexity to prematurely optimize just a reconnect.  E.g., if the client's 
INITIATE is invalid, the server now can't return a fatal error and close the 
connection; it has to return NEGOTIATE.  However, a second bad INITIATE should 
return a fatal error.  The client has to know to create a SASL client on the 
first CHALLENGE, but not on subsequent CHALLENGEs.  Other details rapidly add 
up.

This is as opposed to INITIATE meaning that the client and server both 
instantiate their SASL objects at that time.
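
The trade-off being debated in this thread can be sketched as a tiny server-side decision function. This is an illustrative sketch of the discussed negotiation states, not Hadoop's actual RPC code: an eager client INITIATE saves a round trip when the assumed server name is right, but forces the server to treat the first bad INITIATE as recoverable.

```python
# Illustrative sketch (not Hadoop's actual RPC code) of the discussed
# trade-off: a client that sends INITIATE eagerly saves a round trip on
# the happy path, but the server must answer the first bad INITIATE with
# NEGOTIATE instead of a fatal error, and only fail on the second one.
def server_handle(msg_type: str, server_name_ok: bool, first_attempt: bool) -> str:
    if msg_type == "INITIATE":
        if server_name_ok:
            return "CHALLENGE"   # fast path: no extra negotiation round trip
        if first_attempt:
            return "NEGOTIATE"   # recoverable: client resends with correct name
        return "FATAL"           # second bad INITIATE: close the connection
    return "NEGOTIATE"           # anything else starts normal negotiation

print(server_handle("INITIATE", True, True))    # CHALLENGE
print(server_handle("INITIATE", False, True))   # NEGOTIATE
print(server_handle("INITIATE", False, False))  # FATAL
```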

bq. Trying to see why you're not seeing what I'm seeing: perhaps it's not 
obvious that SaslClient#hasInitialResponse is always false for new connection 
with token (Digest-MD5 at least, cf. rfc-2831)?

Initial response is true for other mechanisms like GSSAPI and PLAIN, which 
means kerberos has just been penalized.  If we upgrade the digest mechanism for 
tokens, that new mechanism may also have an initial response.  We can't design 
this around an internal detail of one particular mechanism (DIGEST-MD5).

Backing up, I thought we agreed earlier to defer reconnect optimizations to a 
future jira?

> Convert SASL to use ProtoBuf and add lengths for non-blocking processing
> 
>
> Key: HADOOP-9421
> URL: https://issues.apache.org/jira/browse/HADOOP-9421
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 2.0.3-alpha
>Reporter: Sanjay Radia
>Assignee: Daryn Sharp
> Attachments: HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421.patch, 
> HADOOP-9421.patch, HADOOP-9421-v2-demo.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9640) RPC Congestion Control

2013-06-11 Thread Xiaobo Peng (JIRA)
Xiaobo Peng created HADOOP-9640:
---

 Summary: RPC Congestion Control
 Key: HADOOP-9640
 URL: https://issues.apache.org/jira/browse/HADOOP-9640
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Xiaobo Peng


Several production Hadoop cluster incidents occurred where the NameNode was 
overloaded and became unresponsive.  This task is to improve the system to 
detect RPC congestion early, and to provide good diagnostic information and 
alerts that identify suspicious jobs/users, so that services can be restored 
quickly.

Excerpted from the communication of one incident, “The map task of a user was 
creating huge number of small files in the user directory. Due to the heavy 
load on NN, the JT also was unable to communicate with NN...The cluster became 
responsive only once the job was killed.”

Excerpted from the communication of another incident, “Namenode was overloaded 
by GetBlockLocation requests. All other requests to namenode were also affected 
by this and hence all jobs slowed down. Cluster almost came to a grinding 
halt…Eventually killed jobtracker to kill all jobs that are running.”

Excerpted from HDFS-945, “We've seen defective applications cause havoc on the 
NameNode, for e.g. by doing 100k+ 'listStatus' on very large directories (60k 
files) etc.”


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9638) parallel test changes caused invalid test path for several HDFS tests on Windows

2013-06-11 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-9638:
--

Attachment: HADOOP-9638.2.patch

Thanks, Andrey.  This looks good.  I found one more thing that needed to be 
fixed for {{TestHDFSFileContextMainOperations}}.  It inherits from 
{{FileContextMainOperationsBaseTest}}, which had a hard-coded call to the 
default constructor of {{FileContextTestHelper}}.  Rather than go back and 
forth on this, I tested a change similar to what you did elsewhere, and it 
passed on Windows.  I'm attaching version 2 of the patch, which includes that 
change.

Aside from that, just a couple of minor style nitpicks:

# {{FileContextTestHelper}} has some lines indented by 4 spaces.  Can you 
please switch those to 2 spaces?
# The patch has a few lines that go past the 80-character limit when creating a 
new {{FileContextTestHelper}}.  Can you please split those, wrapping at 80 
characters?


> parallel test changes caused invalid test path for several HDFS tests on 
> Windows
> 
>
> Key: HADOOP-9638
> URL: https://issues.apache.org/jira/browse/HADOOP-9638
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0, 2.1.0-beta
>Reporter: Chris Nauroth
>Assignee: Andrey Klochkov
> Attachments: HADOOP-9638.2.patch, HADOOP-9638.patch
>
>
> HADOOP-9287 made changes to the tests to support running multiple tests in 
> parallel.  Part of that patch accidentally reverted a prior change to use 
> paths of the form "/tmp/" when running tests against HDFS.  On 
> Windows, the test root will contain a drive spec (e.g. C:\dir), and 
> the colon character is rejected as invalid by HDFS.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9515) Add general interface for NFS and Mount

2013-06-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13680718#comment-13680718
 ] 

Hadoop QA commented on HADOOP-9515:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12587301/HADOOP-9515.4.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-nfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2636//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2636//console

This message is automatically generated.

> Add general interface for NFS and Mount
> ---
>
> Key: HADOOP-9515
> URL: https://issues.apache.org/jira/browse/HADOOP-9515
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 3.0.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Attachments: HADOOP-9515.1.patch, HADOOP-9515.2.patch, 
> HADOOP-9515.3.patch, HADOOP-9515.4.patch
>
>
> This is the general interface implementation for the NFS and Mount 
> protocols, e.g., some protocol-related data structures. It doesn't include 
> the file-system-specific implementations.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9631) ViewFs should use underlying FileSystem's server side defaults

2013-06-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13680695#comment-13680695
 ] 

Hadoop QA commented on HADOOP-9631:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12587250/HADOOP-9631.trunk.1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 1 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.fs.viewfs.TestViewFsWithAuthorityLocalFs
  org.apache.hadoop.fs.viewfs.TestViewFsLocalFs
  org.apache.hadoop.fs.viewfs.TestViewFsHdfs
  org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2632//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2632//console

This message is automatically generated.

> ViewFs should use underlying FileSystem's server side defaults
> --
>
> Key: HADOOP-9631
> URL: https://issues.apache.org/jira/browse/HADOOP-9631
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, viewfs
>Affects Versions: 2.0.4-alpha
>Reporter: Lohit Vijayarenu
> Attachments: HADOOP-9631.trunk.1.patch, TestFileContext.java
>
>
> On a cluster with ViewFS as the default FileSystem, creating files using 
> FileContext will always result in a replication factor of 1, instead of 
> the default of the underlying filesystem (such as HDFS)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9625) HADOOP_OPTS not picked up by hadoop command

2013-06-11 Thread Arpit Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta updated HADOOP-9625:


Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks for the contribution Paul. This is committed to trunk, branch-2 and 
branch-2.1-beta

> HADOOP_OPTS not picked up by hadoop command
> ---
>
> Key: HADOOP-9625
> URL: https://issues.apache.org/jira/browse/HADOOP-9625
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: bin, conf
>Affects Versions: 2.0.3-alpha, 2.0.4-alpha
>Reporter: Paul Han
>Priority: Minor
> Fix For: 2.1.0-beta
>
> Attachments: HADOOP-9625-branch-2.0.5-alpha.patch, HADOOP-9625.patch, 
> HADOOP-9625.patch, HADOOP-9625-release-2.0.5-alpha-rc2.patch
>
>   Original Estimate: 12h
>  Remaining Estimate: 12h
>
> When migrating from Hadoop 1 to Hadoop 2, one thing that caused our users 
> grief was the non-backward-compatible changes. This JIRA is to fix one of 
> those changes:
>   HADOOP_OPTS is not picked up any more by the hadoop command
> With Hadoop 1, HADOOP_OPTS is picked up by the hadoop command. With Hadoop 
> 2, HADOOP_OPTS is overwritten by this line in conf/hadoop_env.sh:
> export HADOOP_OPTS="-Djava.net.preferIPv4Stack=true"
> We should fix this.
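
A minimal sketch of one possible fix, mirroring the quoted line above (this is an assumption about the shape of the fix, not the confirmed contents of the committed patch): make the conf/hadoop_env.sh line append to HADOOP_OPTS rather than overwrite it.

```shell
# Hypothetical corrected line for conf/hadoop_env.sh (a sketch of the fix,
# not the confirmed patch): append the IPv4 flag to whatever HADOOP_OPTS
# the user already exported, instead of clobbering it.
HADOOP_OPTS="-Xmx512m"                    # e.g. set by the user beforehand
export HADOOP_OPTS="$HADOOP_OPTS -Djava.net.preferIPv4Stack=true"
echo "$HADOOP_OPTS"                       # both settings survive
```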

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira



[jira] [Updated] (HADOOP-9625) HADOOP_OPTS not picked up by hadoop command

2013-06-11 Thread Arpit Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta updated HADOOP-9625:


Fix Version/s: (was: 2.0.5-alpha)
   2.1.0-beta

> HADOOP_OPTS not picked up by hadoop command
> ---
>
> Key: HADOOP-9625
> URL: https://issues.apache.org/jira/browse/HADOOP-9625
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: bin, conf
>Affects Versions: 2.0.3-alpha, 2.0.4-alpha
>Reporter: Paul Han
>Priority: Minor
> Fix For: 2.1.0-beta
>
> Attachments: HADOOP-9625-branch-2.0.5-alpha.patch, HADOOP-9625.patch, 
> HADOOP-9625.patch, HADOOP-9625-release-2.0.5-alpha-rc2.patch
>
>   Original Estimate: 12h
>  Remaining Estimate: 12h
>
> When migrating from Hadoop 1 to Hadoop 2, one thing that caused our users 
> grief was the non-backward-compatible changes. This JIRA is to fix one of 
> those changes:
>   HADOOP_OPTS is not picked up any more by the hadoop command
> With Hadoop 1, HADOOP_OPTS is picked up by the hadoop command. With Hadoop 
> 2, HADOOP_OPTS is overwritten by this line in conf/hadoop_env.sh:
> export HADOOP_OPTS="-Djava.net.preferIPv4Stack=true"
> We should fix this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9532) HADOOP_CLIENT_OPTS is appended twice by Windows cmd scripts

2013-06-11 Thread Arpit Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta updated HADOOP-9532:


Fix Version/s: (was: 3.0.0)
   2.1.0-beta

> HADOOP_CLIENT_OPTS is appended twice by Windows cmd scripts
> ---
>
> Key: HADOOP-9532
> URL: https://issues.apache.org/jira/browse/HADOOP-9532
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: bin
>Affects Versions: 3.0.0
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Minor
> Fix For: 2.1.0-beta
>
> Attachments: HADOOP-9532.1.patch
>
>
> This problem was reported initially for the shell scripts in HADOOP-9455.  
> This issue tracks the same problem for the Windows cmd scripts.  Appending 
> HADOOP_CLIENT_OPTS twice can cause an incorrect JVM launch, particularly if 
> trying to set remote debugging flags.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9527) TestLocalFSFileContextSymlink is broken on Windows

2013-06-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13680661#comment-13680661
 ] 

Hadoop QA commented on HADOOP-9527:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12587300/HADOOP-9527.009.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2634//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2634//console

This message is automatically generated.

> TestLocalFSFileContextSymlink is broken on Windows
> --
>
> Key: HADOOP-9527
> URL: https://issues.apache.org/jira/browse/HADOOP-9527
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.3.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HADOOP-9527.001.patch, HADOOP-9527.002.patch, 
> HADOOP-9527.003.patch, HADOOP-9527.004.patch, HADOOP-9527.005.patch, 
> HADOOP-9527.006.patch, HADOOP-9527.007.patch, HADOOP-9527.008.patch, 
> HADOOP-9527.009.patch, RenameLink.java
>
>
> Multiple test cases are broken. I didn't look at each failure in detail.
> The main cause of the failures appears to be that RawLocalFS.readLink() does 
> not work on Windows. We need "winutils readlink" to fix the test.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9635) Fix Potential Stack Overflow in DomainSocket.c

2013-06-11 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13680660#comment-13680660
 ] 

Colin Patrick McCabe commented on HADOOP-9635:
--

committed to branch-2.1-beta.

> Fix Potential Stack Overflow in DomainSocket.c
> --
>
> Key: HADOOP-9635
> URL: https://issues.apache.org/jira/browse/HADOOP-9635
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Affects Versions: 2.3.0
> Environment: OSX 10.8
>Reporter: V. Karthik Kumar
>  Labels: patch, security
> Fix For: 2.3.0
>
> Attachments: 
> 0001-HADOOP-9635-Fix-Potential-Stack-Overflow-in-DomainSo.patch
>
>
> When I was running on OSX, the DataNode was segfaulting. On investigation, it 
> was tracked down to this code. A potential stack overflow was also 
> identified. 
> {code}
>utfLength = (*env)->GetStringUTFLength(env, jstr);
>if (utfLength > sizeof(path)) {
>  jthr = newIOException(env, "path is too long!  We expected a path "
>  "no longer than %zd UTF-8 bytes.", sizeof(path));
>  goto done;
>}
>   // GetStringUTFRegion does not pad with NUL
>(*env)->GetStringUTFRegion(env, jstr, 0, utfLength, path);
> ...
>   //strtok_r can set rest pointer to NULL when no tokens found.
>   //Causes JVM to crash in rest[0]
>for (check[0] = '/', check[1] = '\0', rest = path, token = "";
>token && rest[0];
> token = strtok_r(rest, "/", &rest)) {
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9638) parallel test changes caused invalid test path for several HDFS tests on Windows

2013-06-11 Thread Andrey Klochkov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Klochkov updated HADOOP-9638:


Status: Patch Available  (was: Open)

> parallel test changes caused invalid test path for several HDFS tests on 
> Windows
> 
>
> Key: HADOOP-9638
> URL: https://issues.apache.org/jira/browse/HADOOP-9638
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0, 2.1.0-beta
>Reporter: Chris Nauroth
>Assignee: Andrey Klochkov
> Attachments: HADOOP-9638.patch
>
>
> HADOOP-9287 made changes to the tests to support running multiple tests in 
> parallel.  Part of that patch accidentally reverted a prior change to use 
> paths of the form "/tmp/" when running tests against HDFS.  On 
> Windows, the test root will contain a drive spec (e.g. C:\dir), and 
> the colon character is rejected as invalid by HDFS.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9635) Fix Potential Stack Overflow in DomainSocket.c

2013-06-11 Thread Arun C Murthy (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13680658#comment-13680658
 ] 

Arun C Murthy commented on HADOOP-9635:
---

[~cmccabe] Can you please merge this into branch-2.1-beta too? Seems like an 
important fix. Thanks!

> Fix Potential Stack Overflow in DomainSocket.c
> --
>
> Key: HADOOP-9635
> URL: https://issues.apache.org/jira/browse/HADOOP-9635
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Affects Versions: 2.3.0
> Environment: OSX 10.8
>Reporter: V. Karthik Kumar
>  Labels: patch, security
> Fix For: 2.3.0
>
> Attachments: 
> 0001-HADOOP-9635-Fix-Potential-Stack-Overflow-in-DomainSo.patch
>
>
> When I was running on OSX, the DataNode was segfaulting. On investigation, it 
> was tracked down to this code. A potential stack overflow was also 
> identified. 
> {code}
>utfLength = (*env)->GetStringUTFLength(env, jstr);
>if (utfLength > sizeof(path)) {
>  jthr = newIOException(env, "path is too long!  We expected a path "
>  "no longer than %zd UTF-8 bytes.", sizeof(path));
>  goto done;
>}
>   // GetStringUTFRegion does not pad with NUL
>(*env)->GetStringUTFRegion(env, jstr, 0, utfLength, path);
> ...
>   //strtok_r can set rest pointer to NULL when no tokens found.
>   //Causes JVM to crash in rest[0]
>for (check[0] = '/', check[1] = '\0', rest = path, token = "";
>token && rest[0];
> token = strtok_r(rest, "/", &rest)) {
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9638) parallel test changes caused invalid test path for several HDFS tests on Windows

2013-06-11 Thread Andrey Klochkov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Klochkov updated HADOOP-9638:


Attachment: HADOOP-9638.patch

[~cnauroth], please review and help to test the patch I'm attaching. Thanks for 
catching this.

> parallel test changes caused invalid test path for several HDFS tests on 
> Windows
> 
>
> Key: HADOOP-9638
> URL: https://issues.apache.org/jira/browse/HADOOP-9638
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0, 2.1.0-beta
>Reporter: Chris Nauroth
>Assignee: Andrey Klochkov
> Attachments: HADOOP-9638.patch
>
>
> HADOOP-9287 made changes to the tests to support running multiple tests in 
> parallel.  Part of that patch accidentally reverted a prior change to use 
> paths of the form "/tmp/" when running tests against HDFS.  On 
> Windows, the test root will contain a drive spec (e.g. C:\dir), and 
> the colon character is rejected as invalid by HDFS.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9635) Fix Potential Stack Overflow in DomainSocket.c

2013-06-11 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13680638#comment-13680638
 ] 

Colin Patrick McCabe commented on HADOOP-9635:
--

committed to trunk and branch-2

> Fix Potential Stack Overflow in DomainSocket.c
> --
>
> Key: HADOOP-9635
> URL: https://issues.apache.org/jira/browse/HADOOP-9635
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Affects Versions: 2.3.0
> Environment: OSX 10.8
>Reporter: V. Karthik Kumar
>  Labels: patch, security
> Fix For: 2.3.0
>
> Attachments: 
> 0001-HADOOP-9635-Fix-Potential-Stack-Overflow-in-DomainSo.patch
>
>
> When I was running on OSX, the DataNode was segfaulting. On investigation, it 
> was tracked down to this code. A potential stack overflow was also 
> identified. 
> {code}
>utfLength = (*env)->GetStringUTFLength(env, jstr);
>if (utfLength > sizeof(path)) {
>  jthr = newIOException(env, "path is too long!  We expected a path "
>  "no longer than %zd UTF-8 bytes.", sizeof(path));
>  goto done;
>}
>   // GetStringUTFRegion does not pad with NUL
>(*env)->GetStringUTFRegion(env, jstr, 0, utfLength, path);
> ...
>   //strtok_r can set rest pointer to NULL when no tokens found.
>   //Causes JVM to crash in rest[0]
>for (check[0] = '/', check[1] = '\0', rest = path, token = "";
>token && rest[0];
> token = strtok_r(rest, "/", &rest)) {
> {code}



[jira] [Updated] (HADOOP-9635) Fix Potential Stack Overflow in DomainSocket.c

2013-06-11 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-9635:
-

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Fix Potential Stack Overflow in DomainSocket.c
> --
>
> Key: HADOOP-9635
> URL: https://issues.apache.org/jira/browse/HADOOP-9635
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Affects Versions: 2.3.0
> Environment: OSX 10.8
>Reporter: V. Karthik Kumar
>  Labels: patch, security
> Fix For: 2.3.0
>
> Attachments: 
> 0001-HADOOP-9635-Fix-Potential-Stack-Overflow-in-DomainSo.patch
>
>
> When I was running on OSX, the DataNode was segfaulting. On investigation, it 
> was tracked down to this code. A potential stack overflow was also 
> identified. 
> {code}
>utfLength = (*env)->GetStringUTFLength(env, jstr);
>if (utfLength > sizeof(path)) {
>  jthr = newIOException(env, "path is too long!  We expected a path "
>  "no longer than %zd UTF-8 bytes.", sizeof(path));
>  goto done;
>}
>   // GetStringUTFRegion does not pad with NUL
>(*env)->GetStringUTFRegion(env, jstr, 0, utfLength, path);
> ...
>   //strtok_r can set rest pointer to NULL when no tokens found.
>   //Causes JVM to crash in rest[0]
>for (check[0] = '/', check[1] = '\0', rest = path, token = "";
>token && rest[0];
> token = strtok_r(rest, "/", &rest)) {
> {code}



[jira] [Commented] (HADOOP-9625) HADOOP_OPTS not picked up by hadoop command

2013-06-11 Thread Arpit Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13680634#comment-13680634
 ] 

Arpit Gupta commented on HADOOP-9625:
-

Once done will merge it to branch-2 and 2.1.0-beta

> HADOOP_OPTS not picked up by hadoop command
> ---
>
> Key: HADOOP-9625
> URL: https://issues.apache.org/jira/browse/HADOOP-9625
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: bin, conf
>Affects Versions: 2.0.3-alpha, 2.0.4-alpha
>Reporter: Paul Han
>Priority: Minor
> Fix For: 2.0.5-alpha
>
> Attachments: HADOOP-9625-branch-2.0.5-alpha.patch, HADOOP-9625.patch, 
> HADOOP-9625.patch, HADOOP-9625-release-2.0.5-alpha-rc2.patch
>
>   Original Estimate: 12h
>  Remaining Estimate: 12h
>
> When migrating from Hadoop 1 to Hadoop 2, one thing that caused our users 
> grief is the set of non-backward-compatible changes. This JIRA is to fix one 
> of those changes:
>   HADOOP_OPTS is not picked up any more by the hadoop command
> With Hadoop 1, HADOOP_OPTS is picked up by the hadoop command. With Hadoop 
> 2, HADOOP_OPTS is overwritten by this line in conf/hadoop-env.sh:
> export HADOOP_OPTS="-Djava.net.preferIPv4Stack=true"
> We should fix this.
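The conventional remedy for an overwritten environment variable (a sketch of the idea, not necessarily the exact committed change) is for hadoop-env.sh to append to HADOOP_OPTS instead of replacing it, so options exported by the caller survive:

```shell
# Hypothetical hadoop-env.sh fragment.  Expanding "${HADOOP_OPTS}" first
# keeps whatever the user exported before running the hadoop command,
# instead of clobbering it with only the IPv4 flag.
export HADOOP_OPTS="${HADOOP_OPTS} -Djava.net.preferIPv4Stack=true"
echo "${HADOOP_OPTS}"
```

With this form, running HADOOP_OPTS=-Xmx2g hadoop jar ... would pass both -Xmx2g and the IPv4 flag to the JVM.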



[jira] [Commented] (HADOOP-9625) HADOOP_OPTS not picked up by hadoop command

2013-06-11 Thread Paul Han (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13680636#comment-13680636
 ] 

Paul Han commented on HADOOP-9625:
--

Thank you, Arpit!

> HADOOP_OPTS not picked up by hadoop command
> ---
>
> Key: HADOOP-9625
> URL: https://issues.apache.org/jira/browse/HADOOP-9625
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: bin, conf
>Affects Versions: 2.0.3-alpha, 2.0.4-alpha
>Reporter: Paul Han
>Priority: Minor
> Fix For: 2.0.5-alpha
>
> Attachments: HADOOP-9625-branch-2.0.5-alpha.patch, HADOOP-9625.patch, 
> HADOOP-9625.patch, HADOOP-9625-release-2.0.5-alpha-rc2.patch
>
>   Original Estimate: 12h
>  Remaining Estimate: 12h
>
> When migrating from Hadoop 1 to Hadoop 2, one thing that caused our users 
> grief is the set of non-backward-compatible changes. This JIRA is to fix one 
> of those changes:
>   HADOOP_OPTS is not picked up any more by the hadoop command
> With Hadoop 1, HADOOP_OPTS is picked up by the hadoop command. With Hadoop 
> 2, HADOOP_OPTS is overwritten by this line in conf/hadoop-env.sh:
> export HADOOP_OPTS="-Djava.net.preferIPv4Stack=true"
> We should fix this.



[jira] [Commented] (HADOOP-9625) HADOOP_OPTS not picked up by hadoop command

2013-06-11 Thread Arpit Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13680632#comment-13680632
 ] 

Arpit Gupta commented on HADOOP-9625:
-

Yes, I am in the process of doing that. We need to merge HADOOP-9532 to those 
branches first so your patch can apply cleanly.

> HADOOP_OPTS not picked up by hadoop command
> ---
>
> Key: HADOOP-9625
> URL: https://issues.apache.org/jira/browse/HADOOP-9625
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: bin, conf
>Affects Versions: 2.0.3-alpha, 2.0.4-alpha
>Reporter: Paul Han
>Priority: Minor
> Fix For: 2.0.5-alpha
>
> Attachments: HADOOP-9625-branch-2.0.5-alpha.patch, HADOOP-9625.patch, 
> HADOOP-9625.patch, HADOOP-9625-release-2.0.5-alpha-rc2.patch
>
>   Original Estimate: 12h
>  Remaining Estimate: 12h
>
> When migrating from Hadoop 1 to Hadoop 2, one thing that caused our users 
> grief is the set of non-backward-compatible changes. This JIRA is to fix one 
> of those changes:
>   HADOOP_OPTS is not picked up any more by the hadoop command
> With Hadoop 1, HADOOP_OPTS is picked up by the hadoop command. With Hadoop 
> 2, HADOOP_OPTS is overwritten by this line in conf/hadoop-env.sh:
> export HADOOP_OPTS="-Djava.net.preferIPv4Stack=true"
> We should fix this.



[jira] [Commented] (HADOOP-9625) HADOOP_OPTS not picked up by hadoop command

2013-06-11 Thread Paul Han (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13680628#comment-13680628
 ] 

Paul Han commented on HADOOP-9625:
--

Thanks Arpit for merging it to trunk!

Is it possible for you to apply it to branch-2.x as well? That way the patch is 
available when we sync our build with the 2.0.x releases.


> HADOOP_OPTS not picked up by hadoop command
> ---
>
> Key: HADOOP-9625
> URL: https://issues.apache.org/jira/browse/HADOOP-9625
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: bin, conf
>Affects Versions: 2.0.3-alpha, 2.0.4-alpha
>Reporter: Paul Han
>Priority: Minor
> Fix For: 2.0.5-alpha
>
> Attachments: HADOOP-9625-branch-2.0.5-alpha.patch, HADOOP-9625.patch, 
> HADOOP-9625.patch, HADOOP-9625-release-2.0.5-alpha-rc2.patch
>
>   Original Estimate: 12h
>  Remaining Estimate: 12h
>
> When migrating from Hadoop 1 to Hadoop 2, one thing that caused our users 
> grief is the set of non-backward-compatible changes. This JIRA is to fix one 
> of those changes:
>   HADOOP_OPTS is not picked up any more by the hadoop command
> With Hadoop 1, HADOOP_OPTS is picked up by the hadoop command. With Hadoop 
> 2, HADOOP_OPTS is overwritten by this line in conf/hadoop-env.sh:
> export HADOOP_OPTS="-Djava.net.preferIPv4Stack=true"
> We should fix this.



[jira] [Commented] (HADOOP-9639) truly shared cache for jars (jobjar/libjar)

2013-06-11 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13680627#comment-13680627
 ] 

Chris Nauroth commented on HADOOP-9639:
---

Hi, Sangjin.  Does YARN's concept of localized resources already address the 
use cases that you have in mind?

http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/WritingYarnApplications.html#How_can_I_distribute_my_applications_jars_to_all_of_the_nodes_in_the_YARN_cluster_that_need_it

There is also the capability to control visibility, so that a localized 
resource cached on a node can be reused across all applications run by the 
same user, or even given public visibility for wide-open sharing.

http://hadoop.apache.org/docs/current/api/org/apache/hadoop/yarn/api/records/LocalResourceVisibility.html


> truly shared cache for jars (jobjar/libjar)
> ---
>
> Key: HADOOP-9639
> URL: https://issues.apache.org/jira/browse/HADOOP-9639
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: filecache
>Affects Versions: 2.0.4-alpha
>Reporter: Sangjin Lee
>
> Currently there is the distributed cache, which enables you to cache jars and 
> files so that attempts from the same job can reuse them. However, sharing is 
> limited with the distributed cache because it is normally on a per-job basis. 
> On a large cluster, copying of jobjars and libjars sometimes becomes so 
> prevalent that it consumes a large portion of the network bandwidth, not to 
> mention defeating the purpose of "bringing compute to where data is". This 
> is wasteful because in most cases code doesn't change much across many jobs.
> I'd like to propose and discuss the feasibility of introducing a truly shared 
> cache so that multiple jobs from multiple users can share and cache jars. 
> This JIRA is to open the discussion.



[jira] [Updated] (HADOOP-9515) Add general interface for NFS and Mount

2013-06-11 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HADOOP-9515:
---

Attachment: HADOOP-9515.4.patch

> Add general interface for NFS and Mount
> ---
>
> Key: HADOOP-9515
> URL: https://issues.apache.org/jira/browse/HADOOP-9515
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 3.0.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Attachments: HADOOP-9515.1.patch, HADOOP-9515.2.patch, 
> HADOOP-9515.3.patch, HADOOP-9515.4.patch
>
>
> This is the general interface implementation for the NFS and Mount 
> protocols, e.g., some protocol-related data structures. It doesn't include 
> the file-system-specific implementations.



[jira] [Commented] (HADOOP-9515) Add general interface for NFS and Mount

2013-06-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13680621#comment-13680621
 ] 

Hadoop QA commented on HADOOP-9515:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12587294/HADOOP-9515.3.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 2 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-nfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2633//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2633//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-nfs.html
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2633//console

This message is automatically generated.

> Add general interface for NFS and Mount
> ---
>
> Key: HADOOP-9515
> URL: https://issues.apache.org/jira/browse/HADOOP-9515
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 3.0.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Attachments: HADOOP-9515.1.patch, HADOOP-9515.2.patch, 
> HADOOP-9515.3.patch
>
>
> This is the general interface implementation for the NFS and Mount 
> protocols, e.g., some protocol-related data structures. It doesn't include 
> the file-system-specific implementations.



[jira] [Updated] (HADOOP-9527) TestLocalFSFileContextSymlink is broken on Windows

2013-06-11 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-9527:
--

Attachment: HADOOP-9527.009.patch

Ivan, we use the parent directory of the symlink since the link needs to be 
resolvable at creation. Running the symlink command from the test's 'current 
directory' will not work.

Attached patch to fix the readLink signature.

Thanks for reviewing.

> TestLocalFSFileContextSymlink is broken on Windows
> --
>
> Key: HADOOP-9527
> URL: https://issues.apache.org/jira/browse/HADOOP-9527
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.3.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HADOOP-9527.001.patch, HADOOP-9527.002.patch, 
> HADOOP-9527.003.patch, HADOOP-9527.004.patch, HADOOP-9527.005.patch, 
> HADOOP-9527.006.patch, HADOOP-9527.007.patch, HADOOP-9527.008.patch, 
> HADOOP-9527.009.patch, RenameLink.java
>
>
> Multiple test cases are broken. I didn't look at each failure in detail.
> The main cause of the failures appears to be that RawLocalFS.readLink() does 
> not work on Windows. We need "winutils readlink" to fix the test.



[jira] [Commented] (HADOOP-9635) Fix Potential Stack Overflow in DomainSocket.c

2013-06-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13680607#comment-13680607
 ] 

Hudson commented on HADOOP-9635:


Integrated in Hadoop-trunk-Commit #3898 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3898/])
HADOOP-9635. Fix potential stack overflow in DomainSocket.c (V. Karthik 
Kumar via cmccabe) (Revision 1491927)

 Result = SUCCESS
cmccabe : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1491927
Files : 
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/net/unix/DomainSocket.c
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Fix Potential Stack Overflow in DomainSocket.c
> --
>
> Key: HADOOP-9635
> URL: https://issues.apache.org/jira/browse/HADOOP-9635
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Affects Versions: 2.3.0
> Environment: OSX 10.8
>Reporter: V. Karthik Kumar
>  Labels: patch, security
> Fix For: 2.3.0
>
> Attachments: 
> 0001-HADOOP-9635-Fix-Potential-Stack-Overflow-in-DomainSo.patch
>
>
> When I was running on OSX, the DataNode was segfaulting. On investigation, it 
> was tracked down to this code. A potential stack overflow was also 
> identified. 
> {code}
>utfLength = (*env)->GetStringUTFLength(env, jstr);
>if (utfLength > sizeof(path)) {
>  jthr = newIOException(env, "path is too long!  We expected a path "
>  "no longer than %zd UTF-8 bytes.", sizeof(path));
>  goto done;
>}
>   // GetStringUTFRegion does not pad with NUL
>(*env)->GetStringUTFRegion(env, jstr, 0, utfLength, path);
> ...
>   //strtok_r can set rest pointer to NULL when no tokens found.
>   //Causes JVM to crash in rest[0]
>for (check[0] = '/', check[1] = '\0', rest = path, token = "";
>token && rest[0];
> token = strtok_r(rest, "/", &rest)) {
> {code}



[jira] [Created] (HADOOP-9639) truly shared cache for jars (jobjar/libjar)

2013-06-11 Thread Sangjin Lee (JIRA)
Sangjin Lee created HADOOP-9639:
---

 Summary: truly shared cache for jars (jobjar/libjar)
 Key: HADOOP-9639
 URL: https://issues.apache.org/jira/browse/HADOOP-9639
 Project: Hadoop Common
  Issue Type: New Feature
  Components: filecache
Affects Versions: 2.0.4-alpha
Reporter: Sangjin Lee


Currently there is the distributed cache, which enables you to cache jars and 
files so that attempts from the same job can reuse them. However, sharing is 
limited with the distributed cache because it is normally on a per-job basis. 
On a large cluster, copying of jobjars and libjars sometimes becomes so 
prevalent that it consumes a large portion of the network bandwidth, not to 
mention defeating the purpose of "bringing compute to where data is". This is 
wasteful because in most cases code doesn't change much across many jobs.

I'd like to propose and discuss the feasibility of introducing a truly shared 
cache so that multiple jobs from multiple users can share and cache jars. This 
JIRA is to open the discussion.



[jira] [Updated] (HADOOP-9635) Fix Potential Stack Overflow in DomainSocket.c

2013-06-11 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-9635:
-

Summary: Fix Potential Stack Overflow in DomainSocket.c  (was: Potential 
Stack Overflow in DomainSocket.c)

> Fix Potential Stack Overflow in DomainSocket.c
> --
>
> Key: HADOOP-9635
> URL: https://issues.apache.org/jira/browse/HADOOP-9635
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Affects Versions: 2.3.0
> Environment: OSX 10.8
>Reporter: V. Karthik Kumar
>  Labels: patch, security
> Fix For: 2.3.0
>
> Attachments: 
> 0001-HADOOP-9635-Fix-Potential-Stack-Overflow-in-DomainSo.patch
>
>
> When I was running on OSX, the DataNode was segfaulting. On investigation, it 
> was tracked down to this code. A potential stack overflow was also 
> identified. 
> {code}
>utfLength = (*env)->GetStringUTFLength(env, jstr);
>if (utfLength > sizeof(path)) {
>  jthr = newIOException(env, "path is too long!  We expected a path "
>  "no longer than %zd UTF-8 bytes.", sizeof(path));
>  goto done;
>}
>   // GetStringUTFRegion does not pad with NUL
>(*env)->GetStringUTFRegion(env, jstr, 0, utfLength, path);
> ...
>   //strtok_r can set rest pointer to NULL when no tokens found.
>   //Causes JVM to crash in rest[0]
>for (check[0] = '/', check[1] = '\0', rest = path, token = "";
>token && rest[0];
> token = strtok_r(rest, "/", &rest)) {
> {code}



[jira] [Updated] (HADOOP-9515) Add general interface for NFS and Mount

2013-06-11 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HADOOP-9515:
---

Attachment: HADOOP-9515.3.patch

> Add general interface for NFS and Mount
> ---
>
> Key: HADOOP-9515
> URL: https://issues.apache.org/jira/browse/HADOOP-9515
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 3.0.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Attachments: HADOOP-9515.1.patch, HADOOP-9515.2.patch, 
> HADOOP-9515.3.patch
>
>
> This is the general interface implementation for the NFS and Mount 
> protocols, e.g., some protocol-related data structures. It doesn't include 
> the file-system-specific implementations.



[jira] [Updated] (HADOOP-9631) ViewFs should use underlying FileSystem's server side defaults

2013-06-11 Thread Lohit Vijayarenu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lohit Vijayarenu updated HADOOP-9631:
-

Status: Patch Available  (was: Open)

> ViewFs should use underlying FileSystem's server side defaults
> --
>
> Key: HADOOP-9631
> URL: https://issues.apache.org/jira/browse/HADOOP-9631
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, viewfs
>Affects Versions: 2.0.4-alpha
>Reporter: Lohit Vijayarenu
> Attachments: HADOOP-9631.trunk.1.patch, TestFileContext.java
>
>
> On a cluster with ViewFs as the default FileSystem, creating files using 
> FileContext will always result in a replication factor of 1, instead of the 
> underlying filesystem's default (e.g. HDFS)



[jira] [Commented] (HADOOP-9625) HADOOP_OPTS not picked up by hadoop command

2013-06-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13680568#comment-13680568
 ] 

Hudson commented on HADOOP-9625:


Integrated in Hadoop-trunk-Commit #3896 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3896/])
HADOOP-9625. HADOOP_OPTS not picked up by hadoop command. Contributed by 
Paul Han (Revision 1491907)

 Result = SUCCESS
arpit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1491907
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/conf/hadoop-env.cmd
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/conf/hadoop-env.sh


> HADOOP_OPTS not picked up by hadoop command
> ---
>
> Key: HADOOP-9625
> URL: https://issues.apache.org/jira/browse/HADOOP-9625
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: bin, conf
>Affects Versions: 2.0.3-alpha, 2.0.4-alpha
>Reporter: Paul Han
>Priority: Minor
> Fix For: 2.0.5-alpha
>
> Attachments: HADOOP-9625-branch-2.0.5-alpha.patch, HADOOP-9625.patch, 
> HADOOP-9625.patch, HADOOP-9625-release-2.0.5-alpha-rc2.patch
>
>   Original Estimate: 12h
>  Remaining Estimate: 12h
>
> When migrating from Hadoop 1 to Hadoop 2, one thing that caused our users 
> grief is the set of non-backward-compatible changes. This JIRA is to fix one 
> of those changes:
>   HADOOP_OPTS is not picked up any more by the hadoop command
> With Hadoop 1, HADOOP_OPTS is picked up by the hadoop command. With Hadoop 
> 2, HADOOP_OPTS is overwritten by this line in conf/hadoop-env.sh:
> export HADOOP_OPTS="-Djava.net.preferIPv4Stack=true"
> We should fix this.



[jira] [Assigned] (HADOOP-9638) parallel test changes caused invalid test path for several HDFS tests on Windows

2013-06-11 Thread Andrey Klochkov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Klochkov reassigned HADOOP-9638:
---

Assignee: Andrey Klochkov

> parallel test changes caused invalid test path for several HDFS tests on 
> Windows
> 
>
> Key: HADOOP-9638
> URL: https://issues.apache.org/jira/browse/HADOOP-9638
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0, 2.1.0-beta
>Reporter: Chris Nauroth
>Assignee: Andrey Klochkov
>
> HADOOP-9287 made changes to the tests to support running multiple tests in 
> parallel.  Part of that patch accidentally reverted a prior change to use 
> paths of the form "/tmp/" when running tests against HDFS.  On 
> Windows, the test root will contain a drive spec (e.g. C:\dir), and the 
> colon character is rejected as invalid by HDFS.



[jira] [Commented] (HADOOP-9635) Potential Stack Overflow in DomainSocket.c

2013-06-11 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13680522#comment-13680522
 ] 

Colin Patrick McCabe commented on HADOOP-9635:
--

+1.  Thanks, V. Karthik.

> Potential Stack Overflow in DomainSocket.c
> --
>
> Key: HADOOP-9635
> URL: https://issues.apache.org/jira/browse/HADOOP-9635
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Affects Versions: 2.3.0
> Environment: OSX 10.8
>Reporter: V. Karthik Kumar
>  Labels: patch, security
> Fix For: 2.3.0
>
> Attachments: 
> 0001-HADOOP-9635-Fix-Potential-Stack-Overflow-in-DomainSo.patch
>
>
> When I was running on OSX, the DataNode was segfaulting. On investigation, it 
> was tracked down to this code. A potential stack overflow was also 
> identified. 
> {code}
>utfLength = (*env)->GetStringUTFLength(env, jstr);
>if (utfLength > sizeof(path)) {
>  jthr = newIOException(env, "path is too long!  We expected a path "
>  "no longer than %zd UTF-8 bytes.", sizeof(path));
>  goto done;
>}
>   // GetStringUTFRegion does not pad with NUL
>(*env)->GetStringUTFRegion(env, jstr, 0, utfLength, path);
> ...
>   //strtok_r can set rest pointer to NULL when no tokens found.
>   //Causes JVM to crash in rest[0]
>for (check[0] = '/', check[1] = '\0', rest = path, token = "";
>token && rest[0];
> token = strtok_r(rest, "/", &rest)) {
> {code}



[jira] [Commented] (HADOOP-9638) parallel test changes caused invalid test path for several HDFS tests on Windows

2013-06-11 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13680502#comment-13680502
 ] 

Chris Nauroth commented on HADOOP-9638:
---

HADOOP-9287 changed {{FileSystemTestHelper}} to allow passing an override of 
the test root to its constructor.  This supports overriding to paths of the 
form /tmp/ for HDFS.  It looks like we missed 
{{FileContextTestHelper}} though, so we'll likely need similar changes for 
that.  An example of a failing test is {{TestFcHdfsCreateMkdir}}.

[~aklochkov], are you interested in doing a follow-up patch?  If not, I can 
take it.  I'm also happy to help with testing.

> parallel test changes caused invalid test path for several HDFS tests on 
> Windows
> 
>
> Key: HADOOP-9638
> URL: https://issues.apache.org/jira/browse/HADOOP-9638
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0, 2.1.0-beta
>Reporter: Chris Nauroth
>
> HADOOP-9287 made changes to the tests to support running multiple tests in 
> parallel.  Part of that patch accidentally reverted a prior change to use 
> paths of the form "/tmp/" when running tests against HDFS.  On 
> Windows, the test root will contain a drive spec (e.g. C:\dir), and the 
> colon character is rejected as invalid by HDFS.



[jira] [Commented] (HADOOP-9287) Parallel testing hadoop-common

2013-06-11 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13680496#comment-13680496
 ] 

Chris Nauroth commented on HADOOP-9287:
---

I just submitted HADOOP-9638 for a regression that I discovered related to this 
patch.

> Parallel testing hadoop-common
> --
>
> Key: HADOOP-9287
> URL: https://issues.apache.org/jira/browse/HADOOP-9287
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Tsuyoshi OZAWA
>Assignee: Andrey Klochkov
> Fix For: 3.0.0, 2.1.0-beta
>
> Attachments: HADOOP-9287.1.patch, HADOOP-9287-branch-2--N1.patch, 
> HADOOP-9287--N3.patch, HADOOP-9287--N3.patch, HADOOP-9287--N4.patch, 
> HADOOP-9287--N5.patch, HADOOP-9287--N6.patch, HADOOP-9287--N7.patch, 
> HADOOP-9287.patch, HADOOP-9287.patch
>
>
> The maven surefire plugin supports a parallel testing feature. By using it, the 
> tests can run faster.
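For context, surefire's parallel execution is driven by plugin configuration. A minimal illustrative sketch (the mode and thread count below are chosen for the example, not taken from the attached patches):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <!-- Illustrative only: run test classes concurrently on 4 threads. -->
    <parallel>classes</parallel>
    <threadCount>4</threadCount>
  </configuration>
</plugin>
```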



[jira] [Created] (HADOOP-9638) parallel test changes caused invalid test path for several HDFS tests on Windows

2013-06-11 Thread Chris Nauroth (JIRA)
Chris Nauroth created HADOOP-9638:
-

 Summary: parallel test changes caused invalid test path for 
several HDFS tests on Windows
 Key: HADOOP-9638
 URL: https://issues.apache.org/jira/browse/HADOOP-9638
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Chris Nauroth


HADOOP-9287 made changes to the tests to support running multiple tests in 
parallel.  Part of that patch accidentally reverted a prior change to use paths 
of the form "/tmp/" when running tests against HDFS.  On Windows, 
the test root will contain a drive spec (e.g. C:\dir), and the colon 
character is rejected as invalid by HDFS.



[jira] [Updated] (HADOOP-9631) ViewFs should use underlying FileSystem's server side defaults

2013-06-11 Thread Lohit Vijayarenu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lohit Vijayarenu updated HADOOP-9631:
-

Attachment: HADOOP-9631.trunk.1.patch

Attached is a patch which deprecates getServerDefaults() and adds 
getServerDefaults(Path). I tested this by deploying it on one of our YARN 
clusters and could see app logs getting created with a replication factor of 3 
instead of 1.
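A minimal, self-contained sketch of the delegation pattern the patch describes (the class and method names below are simplified stand-ins, not the real Hadoop API): a ViewFs-like mount table resolves a path to its target filesystem and asks that filesystem for its server-side defaults, instead of returning its own local defaults.

```java
import java.util.HashMap;
import java.util.Map;

// Simplified stand-in for FsServerDefaults: only replication is modeled.
class ServerDefaults {
    final short replication;
    ServerDefaults(short replication) { this.replication = replication; }
}

// Simplified stand-in for an underlying FileSystem (e.g. HDFS).
class MockFileSystem {
    private final ServerDefaults defaults;
    MockFileSystem(short replication) { this.defaults = new ServerDefaults(replication); }
    ServerDefaults getServerDefaults() { return defaults; }
}

// Sketch of a ViewFs-like mount table: the path-aware getServerDefaults(path)
// resolves the path first, then delegates to the matching filesystem.
class MockViewFs {
    private final Map<String, MockFileSystem> mountTable = new HashMap<>();
    void mount(String prefix, MockFileSystem fs) { mountTable.put(prefix, fs); }

    ServerDefaults getServerDefaults(String path) {
        for (Map.Entry<String, MockFileSystem> e : mountTable.entrySet()) {
            if (path.startsWith(e.getKey())) {
                return e.getValue().getServerDefaults();
            }
        }
        // Fall back to local defaults (replication 1) when nothing matches,
        // mirroring the pre-patch behavior the issue reports.
        return new ServerDefaults((short) 1);
    }
}

public class ViewFsDefaultsSketch {
    public static void main(String[] args) {
        MockViewFs viewFs = new MockViewFs();
        viewFs.mount("/hdfs", new MockFileSystem((short) 3)); // HDFS-like: replication 3
        System.out.println(viewFs.getServerDefaults("/hdfs/app-logs").replication);
        System.out.println(viewFs.getServerDefaults("/unmounted/x").replication);
    }
}
```

With the path-aware variant, files created under a mount backed by HDFS pick up HDFS's replication factor rather than the mount table's local default.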

> ViewFs should use underlying FileSystem's server side defaults
> --
>
> Key: HADOOP-9631
> URL: https://issues.apache.org/jira/browse/HADOOP-9631
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, viewfs
>Affects Versions: 2.0.4-alpha
>Reporter: Lohit Vijayarenu
> Attachments: HADOOP-9631.trunk.1.patch, TestFileContext.java
>
>
> On a cluster with ViewFS as the default FileSystem, creating files using 
> FileContext will always result in a replication factor of 1, instead of the 
> underlying filesystem's default (like HDFS)



[jira] [Commented] (HADOOP-9581) hadoop --config non-existent directory should result in error

2013-06-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13680399#comment-13680399
 ] 

Hudson commented on HADOOP-9581:


Integrated in Hadoop-Mapreduce-trunk #1454 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1454/])
HADOOP-9581. hadoop --config non-existent directory should result in error. 
Contributed by Ashwin Shankar (Revision 1491548)

 Result = SUCCESS
jlowe : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1491548
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/hadoop-config.sh
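A minimal sketch of the kind of guard such a fix could add to hadoop-config.sh (hypothetical; the function name and error message below are illustrative, not taken from the actual HADOOP-9581.txt patch):

```shell
# Hypothetical guard in the spirit of this fix: fail fast when the
# --config argument names a directory that does not exist, instead of
# silently falling back to defaults.
check_config_dir() {
  if [ ! -d "$1" ]; then
    echo "Error: Cannot find configuration directory: $1" >&2
    return 1
  fi
  return 0
}

check_config_dir /tmp && echo "config dir ok"
check_config_dir /no/such/config/dir || echo "rejected missing config dir"
```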


> hadoop --config non-existent directory should result in error 
> --
>
> Key: HADOOP-9581
> URL: https://issues.apache.org/jira/browse/HADOOP-9581
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0, 0.23.7, 2.0.4-alpha
>Reporter: Ashwin Shankar
>Assignee: Ashwin Shankar
> Fix For: 2.1.0-beta, 0.23.9
>
> Attachments: HADOOP-9581.txt
>
>
> Courtesy : [~cwchung]
> {quote}Providing a non-existent config directory should result in error.
> $ hadoop dfs -ls /  : shows Hadoop DFS directory
> $ hadoop --config bad_config_dir dfs -ls : successful, showing Linux directory
> {quote}



[jira] [Commented] (HADOOP-9630) Remove IpcSerializationType

2013-06-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13680400#comment-13680400
 ] 

Hudson commented on HADOOP-9630:


Integrated in Hadoop-Mapreduce-trunk #1454 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1454/])
HADOOP-9630. [RPC v9] Remove IpcSerializationType. (Junping Du via llu) 
(Revision 1491682)

 Result = SUCCESS
llu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1491682
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java


> Remove IpcSerializationType
> ---
>
> Key: HADOOP-9630
> URL: https://issues.apache.org/jira/browse/HADOOP-9630
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Luke Lu
>Assignee: Junping Du
>  Labels: rpc
> Fix For: 2.1.0-beta
>
> Attachments: HADOOP-9630.patch
>
>
> IpcSerializationType is assumed to be protobuf for the foreseeable future. Not 
> to be confused with RpcKind, which still supports different RpcEngines. Let's 
> remove the dead code, which can be confusing to maintain.



[jira] [Commented] (HADOOP-9604) Wrong Javadoc of FSDataOutputStream

2013-06-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13680398#comment-13680398
 ] 

Hudson commented on HADOOP-9604:


Integrated in Hadoop-Mapreduce-trunk #1454 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1454/])
HADOOP-9604. Javadoc of FSDataOutputStream is slightly inaccurate. 
Contributed by Jingguo Yao. (Revision 1491668)

 Result = SUCCESS
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1491668
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSDataOutputStream.java


> Wrong Javadoc of FSDataOutputStream
> ---
>
> Key: HADOOP-9604
> URL: https://issues.apache.org/jira/browse/HADOOP-9604
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 1.0.4
>Reporter: Jingguo Yao
>Assignee: Jingguo Yao
>Priority: Minor
> Fix For: 2.1.0-beta
>
> Attachments: HADOOP-9604.patch
>
>   Original Estimate: 20m
>  Remaining Estimate: 20m
>
> The following Javadoc of FSDataOutputStream is wrong.
> {quote}
>   buffers output through a \{@link BufferedOutputStream\} and creates a 
> checksum file.
> {quote}
> FSDataOutputStream has nothing to do with a BufferedOutputStream. Nor does it 
> create a checksum file.



[jira] [Commented] (HADOOP-9604) Wrong Javadoc of FSDataOutputStream

2013-06-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13680360#comment-13680360
 ] 

Hudson commented on HADOOP-9604:


Integrated in Hadoop-Yarn-trunk #237 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/237/])
HADOOP-9604. Javadoc of FSDataOutputStream is slightly inaccurate. 
Contributed by Jingguo Yao. (Revision 1491668)

 Result = SUCCESS
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1491668
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSDataOutputStream.java


> Wrong Javadoc of FSDataOutputStream
> ---
>
> Key: HADOOP-9604
> URL: https://issues.apache.org/jira/browse/HADOOP-9604
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 1.0.4
>Reporter: Jingguo Yao
>Assignee: Jingguo Yao
>Priority: Minor
> Fix For: 2.1.0-beta
>
> Attachments: HADOOP-9604.patch
>
>   Original Estimate: 20m
>  Remaining Estimate: 20m
>
> The following Javadoc of FSDataOutputStream is wrong.
> {quote}
>   buffers output through a \{@link BufferedOutputStream\} and creates a 
> checksum file.
> {quote}
> FSDataOutputStream has nothing to do with a BufferedOutputStream. Nor does it 
> create a checksum file.



[jira] [Commented] (HADOOP-9630) Remove IpcSerializationType

2013-06-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13680362#comment-13680362
 ] 

Hudson commented on HADOOP-9630:


Integrated in Hadoop-Yarn-trunk #237 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/237/])
HADOOP-9630. [RPC v9] Remove IpcSerializationType. (Junping Du via llu) 
(Revision 1491682)

 Result = SUCCESS
llu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1491682
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java


> Remove IpcSerializationType
> ---
>
> Key: HADOOP-9630
> URL: https://issues.apache.org/jira/browse/HADOOP-9630
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Luke Lu
>Assignee: Junping Du
>  Labels: rpc
> Fix For: 2.1.0-beta
>
> Attachments: HADOOP-9630.patch
>
>
> IpcSerializationType is assumed to be protobuf for the foreseeable future. Not 
> to be confused with RpcKind, which still supports different RpcEngines. Let's 
> remove the dead code, which can be confusing to maintain.



[jira] [Commented] (HADOOP-9581) hadoop --config non-existent directory should result in error

2013-06-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13680361#comment-13680361
 ] 

Hudson commented on HADOOP-9581:


Integrated in Hadoop-Yarn-trunk #237 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/237/])
HADOOP-9581. hadoop --config non-existent directory should result in error. 
Contributed by Ashwin Shankar (Revision 1491548)

 Result = SUCCESS
jlowe : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1491548
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/hadoop-config.sh


> hadoop --config non-existent directory should result in error 
> --
>
> Key: HADOOP-9581
> URL: https://issues.apache.org/jira/browse/HADOOP-9581
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0, 0.23.7, 2.0.4-alpha
>Reporter: Ashwin Shankar
>Assignee: Ashwin Shankar
> Fix For: 2.1.0-beta, 0.23.9
>
> Attachments: HADOOP-9581.txt
>
>
> Courtesy : [~cwchung]
> {quote}Providing a non-existent config directory should result in error.
> $ hadoop dfs -ls /  : shows Hadoop DFS directory
> $ hadoop --config bad_config_dir dfs -ls : successful, showing Linux directory
> {quote}



[jira] [Commented] (HADOOP-9635) Potential Stack Overflow in DomainSocket.c

2013-06-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13680272#comment-13680272
 ] 

Hadoop QA commented on HADOOP-9635:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12587199/0001-HADOOP-9635-Fix-Potential-Stack-Overflow-in-DomainSo.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2631//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2631//console

This message is automatically generated.

> Potential Stack Overflow in DomainSocket.c
> --
>
> Key: HADOOP-9635
> URL: https://issues.apache.org/jira/browse/HADOOP-9635
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Affects Versions: 2.3.0
> Environment: OSX 10.8
>Reporter: V. Karthik Kumar
>  Labels: patch, security
> Fix For: 2.3.0
>
> Attachments: 
> 0001-HADOOP-9635-Fix-Potential-Stack-Overflow-in-DomainSo.patch
>
>
> When I was running on OSX, the DataNode was segfaulting. On investigation, it 
> was tracked down to this code. A potential stack overflow was also 
> identified. 
> {code}
> utfLength = (*env)->GetStringUTFLength(env, jstr);
> if (utfLength > sizeof(path)) {
>   jthr = newIOException(env, "path is too long!  We expected a path "
>       "no longer than %zd UTF-8 bytes.", sizeof(path));
>   goto done;
> }
> // GetStringUTFRegion does not pad with NUL
> (*env)->GetStringUTFRegion(env, jstr, 0, utfLength, path);
> ...
> // strtok_r can set rest pointer to NULL when no tokens found.
> // Causes JVM to crash in rest[0]
> for (check[0] = '/', check[1] = '\0', rest = path, token = "";
>      token && rest[0];
>      token = strtok_r(rest, "/", &rest)) {
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira