[jira] [Commented] (HADOOP-9915) o.a.h.fs.Stat support on Macosx

2013-08-29 Thread Binglin Chang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13753375#comment-13753375
 ] 

Binglin Chang commented on HADOOP-9915:
---

The test timeout is unrelated; it happens occasionally. 
I have also seen it in 
[HADOOP-9897|https://issues.apache.org/jira/browse/HADOOP-9897?focusedCommentId=13749588&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13749588]

 o.a.h.fs.Stat support on Macosx
 ---

 Key: HADOOP-9915
 URL: https://issues.apache.org/jira/browse/HADOOP-9915
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Binglin Chang
Assignee: Binglin Chang
Priority: Trivial
 Attachments: HADOOP-9915.v1.patch


 Support Mac OS X in o.a.h.fs.Stat.
 The stat command on Mac OS X seems to behave the same as stat on FreeBSD. I made 
 Mac OS X use the same ExecString as FreeBSD, and it seems to work fine.
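
For illustration only, a minimal sketch of the idea (not the attached HADOOP-9915.v1.patch): pick a BSD-style stat(1) invocation on Mac OS X and FreeBSD and a GNU coreutils one elsewhere. The class name, OS check, and format strings below are made up.

{code}
// Illustrative sketch only, not the real o.a.h.fs.Stat: Mac OS X reuses the
// BSD-style stat(1) invocation already used for FreeBSD. Format strings are
// placeholders.
public final class StatExecStrings {
  private static final String OS = System.getProperty("os.name");
  private static final boolean BSD_LIKE =
      OS.startsWith("Mac") || OS.startsWith("FreeBSD");

  /** Build the stat(1) argument list for the current platform. */
  public static String[] statCommand(String path) {
    return BSD_LIKE
        // BSD stat: format given via -f
        ? new String[] {"stat", "-f", "%z,%Y,%N", path}
        // GNU coreutils stat: format given via -c
        : new String[] {"stat", "-c", "%s,%Y,%N", path};
  }
}
{code}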

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9906) Move HAZKUtil to o.a.h.util.ZKUtil and make inner-classes public

2013-08-29 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13753398#comment-13753398
 ] 

Bikas Saha commented on HADOOP-9906:


Is the following backwards incompatible, since we changed the signature of a 
method of a public class?
{code}
-  public static List<ACL> parseACLs(String aclString) {
+  public static List<ACL> parseACLs(String aclString) throws
+  BadAclFormatException {
{code}
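
For what it's worth on the compatibility question, a minimal self-contained sketch, assuming BadAclFormatException is unchecked (i.e. extends RuntimeException): declaring an unchecked exception in a throws clause is documentation only, so callers written against the old signature still compile and link. The return type and parsing below are simplified stand-ins, not the real ZKUtil code.

{code}
// Sketch only: assumes BadAclFormatException is unchecked.
class BadAclFormatException extends RuntimeException {
  BadAclFormatException(String msg) { super(msg); }
}

class ZKUtilSketch {
  // The added throws clause documents the failure mode without forcing callers
  // to handle it.
  static java.util.List<String> parseACLs(String aclString)
      throws BadAclFormatException {
    if (aclString == null || aclString.isEmpty()) {
      throw new BadAclFormatException("empty ACL spec");
    }
    return java.util.Arrays.asList(aclString.split(","));
  }

  public static void main(String[] args) {
    // A caller written against the old signature: no try/catch needed.
    System.out.println(parseACLs("sasl:nn:rwcda,world:anyone:r"));
  }
}
{code}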

 Move HAZKUtil to o.a.h.util.ZKUtil and make inner-classes public
 

 Key: HADOOP-9906
 URL: https://issues.apache.org/jira/browse/HADOOP-9906
 Project: Hadoop Common
  Issue Type: Bug
  Components: ha
Affects Versions: 2.1.0-beta
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
Priority: Minor
 Fix For: 2.1.1-beta

 Attachments: hadoop-9906-1.patch, hadoop-9906-2.patch


 HAZKUtil defines a couple of exceptions - BadAclFormatException and 
 BadAuthFormatException - that can be made public for use in other components. 
 For instance, YARN-353 could use it in tests and ACL validation.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9916) TestIPC timeouts occasionally

2013-08-29 Thread Binglin Chang (JIRA)
Binglin Chang created HADOOP-9916:
-

 Summary: TestIPC timeouts occasionally
 Key: HADOOP-9916
 URL: https://issues.apache.org/jira/browse/HADOOP-9916
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Binglin Chang
Assignee: Binglin Chang
Priority: Minor


TestIPC timeouts occasionally, for example: 
[https://issues.apache.org/jira/browse/HDFS-5130?focusedCommentId=13749870&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13749870]
[https://issues.apache.org/jira/browse/HADOOP-9915?focusedCommentId=13753302&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13753302]

I looked into the code and have not found the cause yet, but we can do a few things first:

1. There are two methods with the same name:
   void testSerial()
   void testSerial(int handlerCount, boolean handlerSleep, ...)
   The second is not a test case, but somehow it causes testSerial (the first one) 
to run twice; see the test report:
{code}
  <testcase time="26.896" classname="org.apache.hadoop.ipc.TestIPC" 
name="testSerial"/>
  <testcase time="25.426" classname="org.apache.hadoop.ipc.TestIPC" 
name="testSerial"/>
{code}

2. A timeout annotation should be added, so that the related log is available next time (see the sketch below).
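
A minimal sketch of both suggestions (the names, counts, and timeout value are placeholders, not the actual TestIPC code):

{code}
import org.junit.Test;

public class TestIPCSketch {

  @Test(timeout = 60000)             // suggestion 2: per-test timeout, so a hang
  public void testSerial() throws Exception {  // still produces a usable report
    runSerial(5, false, 2, 5, 100);  // suggestion 1: call a distinctly named helper
  }

  // No @Test annotation and a different name: never picked up as a test case.
  private void runSerial(int handlerCount, boolean handlerSleep,
                         int clientCount, int callerCount, int callCount)
      throws Exception {
    // ... drive the IPC client/server here ...
  }
}
{code}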



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9914) nodes overview should use FQDNs

2013-08-29 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HADOOP-9914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13753421#comment-13753421
 ] 

André Kelpe commented on HADOOP-9914:
-

I spent a day getting /etc/hosts and /etc/avahi/hosts right when I created the 
setup for hadoop 1.1.2, and that works fine. The namenode is actually still 
working correctly: when you browse the file system, it uses the fully qualified 
names of the datanodes, meaning everything works as I expect. The 
resourcemanager does not show the same behaviour.

Here are my hosts:

$ cat /etc/hosts
127.0.0.1   localhost
192.168.7.10  master.local  master 
192.168.7.11  backup.local  backup 
192.168.7.12  hadoop1.local hadoop1
192.168.7.13  hadoop2.local hadoop2
192.168.7.14  hadoop3.local hadoop3

$ cat /etc/avahi/hosts 
127.0.0.1   localhost
192.168.7.10  master.local  master 
192.168.7.11  backup.local  backup 
192.168.7.12  hadoop1.local hadoop1
192.168.7.13  hadoop2.local hadoop2
192.168.7.14  hadoop3.local hadoop3

The hostnames are also correct:

$ for host in master hadoop1 hadoop2 hadoop3; do vagrant ssh $host --command 
hostname ; done
master.local
hadoop1.local
hadoop2.local
hadoop3.local

and just to be sure:

$ for host in master hadoop1 hadoop2 hadoop3; do vagrant ssh $host --command 
'cat /etc/hostname' ; done
master.local
hadoop1.local
hadoop2.local
hadoop3.local

I also attached a screenshot of the web interface.



 nodes overview should use FQDNs
 ---

 Key: HADOOP-9914
 URL: https://issues.apache.org/jira/browse/HADOOP-9914
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.1.0-beta
Reporter: André Kelpe

 I am running a hadoop cluster in a bunch of VMs on my local machine, and I am 
 using avahi/zeroconf for local name resolution (this is to avoid having to 
 fiddle with my /etc/hosts file). 
 The resourcemanager has an overview page with links to all the nodemanager 
 web interfaces. The links do not work with zeroconf because they do not 
 include the domain part: zeroconf names look like hadoop1.local, but the web 
 interface uses hadoop1, which will not resolve.
 In hadoop 1.x all web interfaces used FQDNs, so using avahi/zeroconf for name 
 resolution was no problem. The same should be possible in hadoop 2.x.
 I am still beginning to work with hadoop 2.x, so there might be other parts 
 with the same problem, but I am not yet aware of any. If I find more of 
 these, I will update this bug.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9914) nodes overview should use FQDNs

2013-08-29 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HADOOP-9914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13753424#comment-13753424
 ] 

André Kelpe commented on HADOOP-9914:
-

Sorry, I hit submit but did not intend to. I found the issue: /etc/hostname 
was not exactly right. Funnily enough, HDFS always worked fine. PEBKAC. 
Closing the issue.

 nodes overview should use FQDNs
 ---

 Key: HADOOP-9914
 URL: https://issues.apache.org/jira/browse/HADOOP-9914
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.1.0-beta
Reporter: André Kelpe

 I am running a hadoop cluster in a bunch of VMs on my local machine, and I am 
 using avahi/zeroconf for local name resolution (this is to avoid having to 
 fiddle with my /etc/hosts file). 
 The resourcemanager has an overview page with links to all the nodemanager 
 web interfaces. The links do not work with zeroconf because they do not 
 include the domain part: zeroconf names look like hadoop1.local, but the web 
 interface uses hadoop1, which will not resolve.
 In hadoop 1.x all web interfaces used FQDNs, so using avahi/zeroconf for name 
 resolution was no problem. The same should be possible in hadoop 2.x.
 I am still beginning to work with hadoop 2.x, so there might be other parts 
 with the same problem, but I am not yet aware of any. If I find more of 
 these, I will update this bug.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HADOOP-9914) nodes overview should use FQDNs

2013-08-29 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HADOOP-9914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

André Kelpe resolved HADOOP-9914.
-

Resolution: Invalid

 nodes overview should use FQDNs
 ---

 Key: HADOOP-9914
 URL: https://issues.apache.org/jira/browse/HADOOP-9914
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.1.0-beta
Reporter: André Kelpe

 I am running a hadoop cluster in a bunch of VMs on my local machine, and I am 
 using avahi/zeroconf for local name resolution (this is to avoid having to 
 fiddle with my /etc/hosts file). 
 The resourcemanager has an overview page with links to all the nodemanager 
 web interfaces. The links do not work with zeroconf because they do not 
 include the domain part: zeroconf names look like hadoop1.local, but the web 
 interface uses hadoop1, which will not resolve.
 In hadoop 1.x all web interfaces used FQDNs, so using avahi/zeroconf for name 
 resolution was no problem. The same should be possible in hadoop 2.x.
 I am still beginning to work with hadoop 2.x, so there might be other parts 
 with the same problem, but I am not yet aware of any. If I find more of 
 these, I will update this bug.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9913) Document time unit to metrics

2013-08-29 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13753473#comment-13753473
 ] 

Tsuyoshi OZAWA commented on HADOOP-9913:


+1, the fix policy LGTM.
Two comments: 

1. Can you understand what "Queue time" stands for? If your answer is yes, 
it's OK. However, IMHO, "Time in server side queue" is a more appropriate 
expression (see the sketch below).
2. The patch could be split between hadoop-common-project and hadoop-hdfs-project.
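
A hedged sketch of what the documented descriptions could look like, stating the unit and using a phrase like the one suggested in point 1; the field names and wording are illustrative, not the attached patch.

{code}
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.lib.MutableGaugeInt;
import org.apache.hadoop.metrics2.lib.MutableRate;

class NameNodeMetricsSketch {
  // Unit stated directly in the description, so users need not guess sec vs msec.
  @Metric("Duration in SafeMode at startup in milliseconds")
  MutableGaugeInt safeModeTime;

  @Metric("Time loading FS Image at startup in milliseconds")
  MutableGaugeInt fsImageLoadTime;

  // Clearer phrase than "Queue time", per the comment above.
  @Metric("Time spent in the server-side call queue in milliseconds")
  MutableRate rpcQueueTime;
}
{code}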

 Document time unit to metrics
 -

 Key: HADOOP-9913
 URL: https://issues.apache.org/jira/browse/HADOOP-9913
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation, metrics
Affects Versions: 3.0.0, 2.1.0-beta
 Environment: trunk
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
Priority: Minor
  Labels: newbie
 Attachments: HADOOP-9913.patch


 For example, in o.a.h.hdfs.server.namenode.metrics.NameNodeMetrics.java, 
 metrics are declared as follows:
 {code}
   @Metric("Duration in SafeMode at startup") MutableGaugeInt safeModeTime;
   @Metric("Time loading FS Image at startup") MutableGaugeInt fsImageLoadTime;
 {code}
 Since some users may be unsure which unit (sec or msec) is used, the unit 
 should be documented.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9910) proxy server start and stop documentation wrong

2013-08-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13753501#comment-13753501
 ] 

Hudson commented on HADOOP-9910:


SUCCESS: Integrated in Hadoop-Yarn-trunk #316 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/316/])
Addendum to HADOOP-9910 for trunk. Removed bad characters from CHANGES.txt note 
that was causing odd issues. (harsh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1518302)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
HADOOP-9910. proxy server start and stop documentation wrong. Contributed by 
Andre Kelpe. (harsh) (harsh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1518296)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/ClusterSetup.apt.vm


 proxy server start and stop documentation wrong
 ---

 Key: HADOOP-9910
 URL: https://issues.apache.org/jira/browse/HADOOP-9910
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.1.0-beta
Reporter: André Kelpe
Priority: Minor
 Fix For: 2.1.1-beta

 Attachments: HADOOP-9910.patch


 I was trying to run a distributed cluster and found two little problems in 
 the documentation on how to start and stop the proxy server. Attached patch 
 fixes it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9894) Race condition in Shell leads to logged error stream handling exceptions

2013-08-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13753504#comment-13753504
 ] 

Hudson commented on HADOOP-9894:


SUCCESS: Integrated in Hadoop-Yarn-trunk #316 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/316/])
HADOOP-9894.  Race condition in Shell leads to logged error stream handling 
exceptions (Arpit Agarwal) (arp: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1518420)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/Shell.java


 Race condition in Shell leads to logged error stream handling exceptions
 

 Key: HADOOP-9894
 URL: https://issues.apache.org/jira/browse/HADOOP-9894
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 2.1.0-beta
Reporter: Jason Lowe
Assignee: Arpit Agarwal
 Attachments: hadoop-9894.01.patch


 Shell.runCommand starts an error stream handling thread and normally joins 
 with it before closing the error stream.  However if parseExecResult throws 
 an exception (e.g.: like Stat.parseExecResult does for FileNotFoundException) 
 then the error thread is not joined and the error stream can be closed before 
 the error stream handling thread is finished.  This causes the error stream 
 handling thread to log an exception backtrace for a normal situation.
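
For illustration, a self-contained sketch of the fix pattern described above: the stderr-draining thread is joined in a finally block before the error stream is closed, even when parseExecResult throws. This is not the attached hadoop-9894.01.patch; all names are made up.

{code}
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

public class JoinErrThreadSketch {
  public static void run(ProcessBuilder pb) throws IOException, InterruptedException {
    Process process = pb.start();
    // Drain stderr on a separate thread so the child cannot block on a full pipe.
    Thread errThread = new Thread(() -> drain(process.getErrorStream()));
    errThread.start();
    try {
      // May throw, e.g. a FileNotFoundException-style parse failure...
      parseExecResult(new BufferedReader(
          new InputStreamReader(process.getInputStream(), StandardCharsets.UTF_8)));
    } finally {
      // ...but the drainer is always joined before the error stream is closed,
      // so it cannot log a spurious backtrace for a normal situation.
      errThread.join();
      process.getErrorStream().close();
      process.waitFor();
    }
  }

  private static void drain(InputStream in) {
    try (BufferedReader r = new BufferedReader(
        new InputStreamReader(in, StandardCharsets.UTF_8))) {
      while (r.readLine() != null) { /* discard or log */ }
    } catch (IOException ignored) {
      // stream closed; nothing to do in this sketch
    }
  }

  private static void parseExecResult(BufferedReader out) throws IOException {
    out.readLine();  // placeholder for command-specific parsing
  }
}
{code}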

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9906) Move HAZKUtil to o.a.h.util.ZKUtil and make inner-classes public

2013-08-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13753506#comment-13753506
 ] 

Hudson commented on HADOOP-9906:


SUCCESS: Integrated in Hadoop-Yarn-trunk #316 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/316/])
Adding and removing files missed for HADOOP-9906 (sandy: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1518306)
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/HAZKUtil.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ZKUtil.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ha/TestHAZKUtil.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestZKUtil.java
HADOOP-9906. Move HAZKUtil to o.a.h.util.ZKUtil and make inner-classes public 
(Karthik Kambatla via Sandy Ryza) (sandy: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1518303)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ActiveStandbyElector.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ZKFailoverController.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ha/TestActiveStandbyElector.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ha/TestActiveStandbyElectorRealZK.java


 Move HAZKUtil to o.a.h.util.ZKUtil and make inner-classes public
 

 Key: HADOOP-9906
 URL: https://issues.apache.org/jira/browse/HADOOP-9906
 Project: Hadoop Common
  Issue Type: Bug
  Components: ha
Affects Versions: 2.1.0-beta
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
Priority: Minor
 Fix For: 2.1.1-beta

 Attachments: hadoop-9906-1.patch, hadoop-9906-2.patch


 HAZKUtil defines a couple of exceptions - BadAclFormatException and 
 BadAuthFormatException - that can be made public for use in other components. 
 For instance, YARN-353 could use it in tests and ACL validation.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9894) Race condition in Shell leads to logged error stream handling exceptions

2013-08-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13753614#comment-13753614
 ] 

Hudson commented on HADOOP-9894:


SUCCESS: Integrated in Hadoop-Hdfs-trunk #1506 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1506/])
HADOOP-9894.  Race condition in Shell leads to logged error stream handling 
exceptions (Arpit Agarwal) (arp: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1518420)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/Shell.java


 Race condition in Shell leads to logged error stream handling exceptions
 

 Key: HADOOP-9894
 URL: https://issues.apache.org/jira/browse/HADOOP-9894
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 2.1.0-beta
Reporter: Jason Lowe
Assignee: Arpit Agarwal
 Attachments: hadoop-9894.01.patch


 Shell.runCommand starts an error stream handling thread and normally joins 
 with it before closing the error stream.  However if parseExecResult throws 
 an exception (e.g.: like Stat.parseExecResult does for FileNotFoundException) 
 then the error thread is not joined and the error stream can be closed before 
 the error stream handling thread is finished.  This causes the error stream 
 handling thread to log an exception backtrace for a normal situation.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9910) proxy server start and stop documentation wrong

2013-08-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13753611#comment-13753611
 ] 

Hudson commented on HADOOP-9910:


SUCCESS: Integrated in Hadoop-Hdfs-trunk #1506 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1506/])
Addendum to HADOOP-9910 for trunk. Removed bad characters from CHANGES.txt note 
that was causing odd issues. (harsh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1518302)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
HADOOP-9910. proxy server start and stop documentation wrong. Contributed by 
Andre Kelpe. (harsh) (harsh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1518296)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/ClusterSetup.apt.vm


 proxy server start and stop documentation wrong
 ---

 Key: HADOOP-9910
 URL: https://issues.apache.org/jira/browse/HADOOP-9910
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.1.0-beta
Reporter: André Kelpe
Priority: Minor
 Fix For: 2.1.1-beta

 Attachments: HADOOP-9910.patch


 I was trying to run a distributed cluster and found two little problems in 
 the documentation on how to start and stop the proxy server. Attached patch 
 fixes it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9906) Move HAZKUtil to o.a.h.util.ZKUtil and make inner-classes public

2013-08-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13753616#comment-13753616
 ] 

Hudson commented on HADOOP-9906:


SUCCESS: Integrated in Hadoop-Hdfs-trunk #1506 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1506/])
Adding and removing files missed for HADOOP-9906 (sandy: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1518306)
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/HAZKUtil.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ZKUtil.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ha/TestHAZKUtil.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestZKUtil.java
HADOOP-9906. Move HAZKUtil to o.a.h.util.ZKUtil and make inner-classes public 
(Karthik Kambatla via Sandy Ryza) (sandy: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1518303)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ActiveStandbyElector.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ZKFailoverController.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ha/TestActiveStandbyElector.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ha/TestActiveStandbyElectorRealZK.java


 Move HAZKUtil to o.a.h.util.ZKUtil and make inner-classes public
 

 Key: HADOOP-9906
 URL: https://issues.apache.org/jira/browse/HADOOP-9906
 Project: Hadoop Common
  Issue Type: Bug
  Components: ha
Affects Versions: 2.1.0-beta
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
Priority: Minor
 Fix For: 2.1.1-beta

 Attachments: hadoop-9906-1.patch, hadoop-9906-2.patch


 HAZKUtil defines a couple of exceptions - BadAclFormatException and 
 BadAuthFormatException - that can be made public for use in other components. 
 For instance, YARN-353 could use it in tests and ACL validation.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9912) globStatus of a symlink to a directory does not report symlink as a directory

2013-08-29 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13753620#comment-13753620
 ] 

Daryn Sharp commented on HADOOP-9912:
-

bq. The intended behavior of Globber.glob (which calls listStatus) is to return 
symlink rather than symlink target I believe

bq. I guess for a long time, pig is using this behavior(listStatus return 
symlink target rather than symlink), I am afraid this behavior is wrong and is 
inconsistent with HDFS. 

Wrong. Wrong. Wrong.  {{listStatus}} resolves symlinks.  {{globStatus}} is 
supposed to be equivalent to {{listStatus}} with wildcard support.  All 
existing code depends on these semantics, and rightly so.  Symlinks should be 
transparent to users unless they specifically want to know if a path is a 
symlink.  That's why there is a counterpart to {{getFileStatus}} called 
{{getFileLinkStatus}} which does not resolve symlinks.

HADOOP-9877 fundamentally broke the semantics of {{globStatus}} based on 
whether the last path component is a glob or static.  The result is:
* /path/symlink - the static component symlink results in a file status of 
the symlink, breaking isFile/isDir/etc
* /path/sym*link - the glob component symlink returns the file status of the 
resolved link, working as expected

{{globStatus}} _must_ consistently return resolved paths.  The semantics 
altered by HADOOP-9877 will break lots of code.  I'm pretty sure that includes 
{{FsShell}}.  We cannot break long-standing semantics just for snapshots.

Why does .snapshot support require a {{getFileLinkStatus}}?  Does 
{{getFileStatus}} not work for a .snapshot directory?
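
A small sketch of the semantic distinction being argued here, using the FileSystem calls named above (the path is hypothetical; assumes a 2.x FileSystem with symlink-aware getFileLinkStatus):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SymlinkStatusSketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.getLocal(new Configuration());
    Path link = new Path("/tmp/dirlink");  // hypothetical symlink to a directory

    // Resolves the link: isDirectory() reflects the target, isSymlink() is false.
    FileStatus resolved = fs.getFileStatus(link);

    // Does not resolve the link: isSymlink() is true.
    FileStatus unresolved = fs.getFileLinkStatus(link);

    // The argument above: globStatus should behave like the resolving call,
    // whether the last component is static or a wildcard.
    FileStatus[] globbed = fs.globStatus(new Path("/tmp/dirl*nk"));

    System.out.println(resolved.isDirectory() + " " + unresolved.isSymlink()
        + " " + (globbed == null ? 0 : globbed.length));
  }
}
{code}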

 globStatus of a symlink to a directory does not report symlink as a directory
 -

 Key: HADOOP-9912
 URL: https://issues.apache.org/jira/browse/HADOOP-9912
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.3.0
Reporter: Jason Lowe
Priority: Blocker
 Attachments: HADOOP-9912-testcase.patch


 globStatus for a path that is a symlink to a directory used to report the 
 resulting FileStatus as a directory but recently this has changed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9912) globStatus of a symlink to a directory does not report symlink as a directory

2013-08-29 Thread Binglin Chang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13753666#comment-13753666
 ] 

Binglin Chang commented on HADOOP-9912:
---

@Daryn I am confused; I originally used getFileStatus and changed it later, please see 
[this 
comment|https://issues.apache.org/jira/browse/HADOOP-9877?focusedCommentId=13741497&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13741497]

 globStatus of a symlink to a directory does not report symlink as a directory
 -

 Key: HADOOP-9912
 URL: https://issues.apache.org/jira/browse/HADOOP-9912
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.3.0
Reporter: Jason Lowe
Priority: Blocker
 Attachments: HADOOP-9912-testcase.patch


 globStatus for a path that is a symlink to a directory used to report the 
 resulting FileStatus as a directory but recently this has changed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9917) cryptic warning when killing a job running with yarn

2013-08-29 Thread JIRA
André Kelpe created HADOOP-9917:
---

 Summary: cryptic warning when killing a job running with yarn
 Key: HADOOP-9917
 URL: https://issues.apache.org/jira/browse/HADOOP-9917
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.1.0-beta
Reporter: André Kelpe
Priority: Minor


When I kill a job like this

hadoop job -kill jobid

I get a cryptic warning, which I don't really understand:

DEPRECATED: Use of this script to execute mapred command is deprecated.
Instead use the mapred command for it.

I fail to parse this, and I believe many others will too. Please make this 
warning clearer.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9894) Race condition in Shell leads to logged error stream handling exceptions

2013-08-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13753674#comment-13753674
 ] 

Hudson commented on HADOOP-9894:


FAILURE: Integrated in Hadoop-Mapreduce-trunk #1533 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1533/])
HADOOP-9894.  Race condition in Shell leads to logged error stream handling 
exceptions (Arpit Agarwal) (arp: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1518420)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/Shell.java


 Race condition in Shell leads to logged error stream handling exceptions
 

 Key: HADOOP-9894
 URL: https://issues.apache.org/jira/browse/HADOOP-9894
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 2.1.0-beta
Reporter: Jason Lowe
Assignee: Arpit Agarwal
 Attachments: hadoop-9894.01.patch


 Shell.runCommand starts an error stream handling thread and normally joins 
 with it before closing the error stream.  However if parseExecResult throws 
 an exception (e.g.: like Stat.parseExecResult does for FileNotFoundException) 
 then the error thread is not joined and the error stream can be closed before 
 the error stream handling thread is finished.  This causes the error stream 
 handling thread to log an exception backtrace for a normal situation.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9910) proxy server start and stop documentation wrong

2013-08-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13753671#comment-13753671
 ] 

Hudson commented on HADOOP-9910:


FAILURE: Integrated in Hadoop-Mapreduce-trunk #1533 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1533/])
Addendum to HADOOP-9910 for trunk. Removed bad characters from CHANGES.txt note 
that was causing odd issues. (harsh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1518302)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
HADOOP-9910. proxy server start and stop documentation wrong. Contributed by 
Andre Kelpe. (harsh) (harsh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1518296)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/ClusterSetup.apt.vm


 proxy server start and stop documentation wrong
 ---

 Key: HADOOP-9910
 URL: https://issues.apache.org/jira/browse/HADOOP-9910
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.1.0-beta
Reporter: André Kelpe
Priority: Minor
 Fix For: 2.1.1-beta

 Attachments: HADOOP-9910.patch


 I was trying to run a distributed cluster and found two little problems in 
 the documentation on how to start and stop the proxy server. Attached patch 
 fixes it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9906) Move HAZKUtil to o.a.h.util.ZKUtil and make inner-classes public

2013-08-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13753676#comment-13753676
 ] 

Hudson commented on HADOOP-9906:


FAILURE: Integrated in Hadoop-Mapreduce-trunk #1533 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1533/])
Adding and removing files missed for HADOOP-9906 (sandy: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1518306)
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/HAZKUtil.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ZKUtil.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ha/TestHAZKUtil.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestZKUtil.java
HADOOP-9906. Move HAZKUtil to o.a.h.util.ZKUtil and make inner-classes public 
(Karthik Kambatla via Sandy Ryza) (sandy: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1518303)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ActiveStandbyElector.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ZKFailoverController.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ha/TestActiveStandbyElector.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ha/TestActiveStandbyElectorRealZK.java


 Move HAZKUtil to o.a.h.util.ZKUtil and make inner-classes public
 

 Key: HADOOP-9906
 URL: https://issues.apache.org/jira/browse/HADOOP-9906
 Project: Hadoop Common
  Issue Type: Bug
  Components: ha
Affects Versions: 2.1.0-beta
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
Priority: Minor
 Fix For: 2.1.1-beta

 Attachments: hadoop-9906-1.patch, hadoop-9906-2.patch


 HAZKUtil defines a couple of exceptions - BadAclFormatException and 
 BadAuthFormatException - that can be made public for use in other components. 
 For instance, YARN-353 could use it in tests and ACL validation.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9912) globStatus of a symlink to a directory does not report symlink as a directory

2013-08-29 Thread Binglin Chang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13753679#comment-13753679
 ] 

Binglin Chang commented on HADOOP-9912:
---

This issue is not related to .snapshot support; it is caused by adding symlink 
support to HDFS and LocalFileSystem without handling consistency well.

 globStatus of a symlink to a directory does not report symlink as a directory
 -

 Key: HADOOP-9912
 URL: https://issues.apache.org/jira/browse/HADOOP-9912
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.3.0
Reporter: Jason Lowe
Priority: Blocker
 Attachments: HADOOP-9912-testcase.patch


 globStatus for a path that is a symlink to a directory used to report the 
 resulting FileStatus as a directory but recently this has changed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9906) Move HAZKUtil to o.a.h.util.ZKUtil and make inner-classes public

2013-08-29 Thread Sandy Ryza (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13753706#comment-13753706
 ] 

Sandy Ryza commented on HADOOP-9906:


HAZKUtil was marked @InterfaceAudience.Private
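
For readers not familiar with the annotation, a tiny illustrative sketch of what that audience marking implies for the compatibility question above (the class body here is made up):

{code}
import org.apache.hadoop.classification.InterfaceAudience;

// A Private-audience class makes no API promise to downstream users, so its
// method signatures may change between releases without being treated as a
// break of the public API.
@InterfaceAudience.Private
class HAZKUtilSketch {
  // signature changes here are internal-only
}
{code}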

 Move HAZKUtil to o.a.h.util.ZKUtil and make inner-classes public
 

 Key: HADOOP-9906
 URL: https://issues.apache.org/jira/browse/HADOOP-9906
 Project: Hadoop Common
  Issue Type: Bug
  Components: ha
Affects Versions: 2.1.0-beta
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
Priority: Minor
 Fix For: 2.1.1-beta

 Attachments: hadoop-9906-1.patch, hadoop-9906-2.patch


 HAZKUtil defines a couple of exceptions - BadAclFormatException and 
 BadAuthFormatException - that can be made public for use in other components. 
 For instance, YARN-353 could use it in tests and ACL validation.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9912) globStatus of a symlink to a directory does not report symlink as a directory

2013-08-29 Thread Binglin Chang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13753707#comment-13753707
 ] 

Binglin Chang commented on HADOOP-9912:
---

Just checked again: 
In LocalFileSystem listStatus resolves symlinks.
In HDFS listStatus does not resolve symlinks.
I did find this conflict when I was doing HADOOP-9877, and I followed the HDFS 
convention and used getFileLinkStatus.

 globStatus of a symlink to a directory does not report symlink as a directory
 -

 Key: HADOOP-9912
 URL: https://issues.apache.org/jira/browse/HADOOP-9912
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.3.0
Reporter: Jason Lowe
Priority: Blocker
 Attachments: HADOOP-9912-testcase.patch


 globStatus for a path that is a symlink to a directory used to report the 
 resulting FileStatus as a directory but recently this has changed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9917) cryptic warning when killing a job running with yarn

2013-08-29 Thread Roman Shaposhnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13753747#comment-13753747
 ] 

Roman Shaposhnik commented on HADOOP-9917:
--

Would you be able to suggest a better wording?

 cryptic warning when killing a job running with yarn
 

 Key: HADOOP-9917
 URL: https://issues.apache.org/jira/browse/HADOOP-9917
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.1.0-beta
Reporter: André Kelpe
Priority: Minor

 When I kill a job like this
 hadoop job -kill jobid
 I get a cryptic warning, which I don't really understand:
 DEPRECATED: Use of this script to execute mapred command is deprecated.
 Instead use the mapred command for it.
 I fail to parse this, and I believe many others will too. Please make this 
 warning clearer.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9917) cryptic warning when killing a job running with yarn

2013-08-29 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HADOOP-9917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13753756#comment-13753756
 ] 

André Kelpe commented on HADOOP-9917:
-

No, because I don't understand what it means.

 cryptic warning when killing a job running with yarn
 

 Key: HADOOP-9917
 URL: https://issues.apache.org/jira/browse/HADOOP-9917
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.1.0-beta
Reporter: André Kelpe
Priority: Minor

 When I kill a job like this
 hadoop job -kill jobid
 I get a cryptic warning, which I don't really understand:
 DEPRECATED: Use of this script to execute mapred command is deprecated.
 Instead use the mapred command for it.
 I fail to parse this, and I believe many others will too. Please make this 
 warning clearer.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9917) cryptic warning when killing a job running with yarn

2013-08-29 Thread Roman Shaposhnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13753759#comment-13753759
 ] 

Roman Shaposhnik commented on HADOOP-9917:
--

It simply means that users are encouraged to start using the 'mapred' command for 
manipulating their MR jobs instead of the top-level 'hadoop' command. Perhaps 
just adding quotes (like I did in the preceding sentence) around 'mapred' to make 
it stand out as the name of a command would do? Or mapred(1)?

 cryptic warning when killing a job running with yarn
 

 Key: HADOOP-9917
 URL: https://issues.apache.org/jira/browse/HADOOP-9917
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.1.0-beta
Reporter: André Kelpe
Priority: Minor

 When I kill a job like this
 hadoop job -kill jobid
 I get a cryptic warning, which I don't really understand:
 DEPRECATED: Use of this script to execute mapred command is deprecated.
 Instead use the mapred command for it.
 I fail to parse this, and I believe many others will too. Please make this 
 warning clearer.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9917) cryptic warning when killing a job running with yarn

2013-08-29 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HADOOP-9917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13753762#comment-13753762
 ] 

André Kelpe commented on HADOOP-9917:
-

Now I get it!

Maybe use something like: Please use the 'mapred' command for interacting with 
jobs instead of the 'hadoop' command.

Side question: what is the rationale for this CLI change? I'd like to read more 
about it. Thanks!

 cryptic warning when killing a job running with yarn
 

 Key: HADOOP-9917
 URL: https://issues.apache.org/jira/browse/HADOOP-9917
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.1.0-beta
Reporter: André Kelpe
Priority: Minor

 When I kill a job like this
 hadoop job -kill jobid
 I get a cryptic warning, which I don't really understand:
 DEPRECATED: Use of this script to execute mapred command is deprecated.
 Instead use the mapred command for it.
 I fail to parse this, and I believe many others will too. Please make this 
 warning clearer.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9918) Add addIfService() to CompositeService

2013-08-29 Thread Karthik Kambatla (JIRA)
Karthik Kambatla created HADOOP-9918:


 Summary: Add addIfService() to CompositeService
 Key: HADOOP-9918
 URL: https://issues.apache.org/jira/browse/HADOOP-9918
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.1.0-beta
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla


Some YARN and MR classes implement their own version of {{addIfService(Object 
object)}} that adds the service to CompositeService if the object is a service. 

It makes more sense to move this helper to CompositeService itself.
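
For illustration, a minimal sketch of what such a helper does, shown in a CompositeService subclass for self-containment; the actual patch attached later places it in CompositeService itself, and this is not that patch:

{code}
import org.apache.hadoop.service.CompositeService;
import org.apache.hadoop.service.Service;

public class MyCompositeService extends CompositeService {
  public MyCompositeService(String name) {
    super(name);
  }

  /** Adds {@code object} as a child service only if it implements {@link Service}. */
  protected boolean addIfService(Object object) {
    if (object instanceof Service) {
      addService((Service) object);
      return true;
    }
    return false;
  }
}
{code}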

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9830) Typo at http://hadoop.apache.org/docs/current/

2013-08-29 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-9830:
--

Component/s: documentation

 Typo at http://hadoop.apache.org/docs/current/
 --

 Key: HADOOP-9830
 URL: https://issues.apache.org/jira/browse/HADOOP-9830
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Reporter: Dmitry Lysnichenko
Assignee: Kousuke Saruta
Priority: Trivial
 Attachments: HADOOP-9830.patch


 Strange symbols at http://hadoop.apache.org/docs/current/
 {code} 
 ApplicationMaster manages the application’s scheduling and coordination. 
 {code}
 Sorry for posting here, could not find any other way to report.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9830) Typo at http://hadoop.apache.org/docs/current/

2013-08-29 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13753922#comment-13753922
 ] 

Akira AJISAKA commented on HADOOP-9830:
---

+1

 Typo at http://hadoop.apache.org/docs/current/
 --

 Key: HADOOP-9830
 URL: https://issues.apache.org/jira/browse/HADOOP-9830
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Reporter: Dmitry Lysnichenko
Assignee: Kousuke Saruta
Priority: Trivial
 Attachments: HADOOP-9830.patch


 Strange symbols at http://hadoop.apache.org/docs/current/
 {code} 
 ApplicationMaster manages the application’s scheduling and coordination. 
 {code}
 Sorry for posting here, could not find any other way to report.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9830) Typo at http://hadoop.apache.org/docs/current/

2013-08-29 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-9830:
--

Affects Version/s: 2.1.0-beta
   0.23.9
   2.0.6-alpha

 Typo at http://hadoop.apache.org/docs/current/
 --

 Key: HADOOP-9830
 URL: https://issues.apache.org/jira/browse/HADOOP-9830
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.1.0-beta, 0.23.9, 2.0.6-alpha
Reporter: Dmitry Lysnichenko
Assignee: Kousuke Saruta
Priority: Trivial
 Attachments: HADOOP-9830.patch


 Strange symbols at http://hadoop.apache.org/docs/current/
 {code} 
 ApplicationMaster manages the application’s scheduling and coordination. 
 {code}
 Sorry for posting here, could not find any other way to report.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9918) Add addIfService() to CompositeService

2013-08-29 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-9918:
-

Attachment: hadoop-9918-1.patch

Straight-forward patch.

 Add addIfService() to CompositeService
 --

 Key: HADOOP-9918
 URL: https://issues.apache.org/jira/browse/HADOOP-9918
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.1.0-beta
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
  Labels: service
 Attachments: hadoop-9918-1.patch


 Some YARN and MR classes implement their own version of {{addIfService(Object 
 object)}} that adds the service to CompositeService if the object is a 
 service. 
 It makes more sense to move this helper to CompositeService itself.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9918) Add addIfService() to CompositeService

2013-08-29 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-9918:
-

Status: Patch Available  (was: Open)

 Add addIfService() to CompositeService
 --

 Key: HADOOP-9918
 URL: https://issues.apache.org/jira/browse/HADOOP-9918
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.1.0-beta
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
  Labels: service
 Attachments: hadoop-9918-1.patch


 Some YARN and MR classes implement their own version of {{addIfService(Object 
 object)}} that adds the service to CompositeService if the object is a 
 service. 
 It makes more sense to move this helper to CompositeService itself.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9918) Add addIfService() to CompositeService

2013-08-29 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-9918:
-

Priority: Minor  (was: Major)

 Add addIfService() to CompositeService
 --

 Key: HADOOP-9918
 URL: https://issues.apache.org/jira/browse/HADOOP-9918
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.1.0-beta
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
Priority: Minor
  Labels: service
 Attachments: hadoop-9918-1.patch


 Some YARN and MR classes implement their own version of {{addIfService(Object 
 object)}} that adds the service to CompositeService if the object is a 
 service. 
 It makes more sense to move this helper to CompositeService itself.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9918) Add addIfService() to CompositeService

2013-08-29 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-9918:
-

Status: Open  (was: Patch Available)

 Add addIfService() to CompositeService
 --

 Key: HADOOP-9918
 URL: https://issues.apache.org/jira/browse/HADOOP-9918
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.1.0-beta
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
Priority: Minor
  Labels: service
 Attachments: hadoop-9918-1.patch


 Some YARN and MR classes implement their own version of {{addIfService(Object 
 object)}} that adds the service to CompositeService if the object is a 
 service. 
 It makes more sense to move this helper to CompositeService itself.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9918) Add addIfService() to CompositeService

2013-08-29 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-9918:
-

Status: Patch Available  (was: Open)

 Add addIfService() to CompositeService
 --

 Key: HADOOP-9918
 URL: https://issues.apache.org/jira/browse/HADOOP-9918
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.1.0-beta
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
Priority: Minor
  Labels: service
 Attachments: hadoop-9918-1.patch, hadoop-9918-2.patch


 Some YARN and MR classes implement their own version of {{addIfService(Object 
 object)}} that adds the service to CompositeService if the object is a 
 service. 
 It makes more sense to move this helper to CompositeService itself.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9918) Add addIfService() to CompositeService

2013-08-29 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-9918:
-

Attachment: hadoop-9918-2.patch

Forgot to remove unused imports. Updating the patch.

 Add addIfService() to CompositeService
 --

 Key: HADOOP-9918
 URL: https://issues.apache.org/jira/browse/HADOOP-9918
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.1.0-beta
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
Priority: Minor
  Labels: service
 Attachments: hadoop-9918-1.patch, hadoop-9918-2.patch


 Some YARN and MR classes implement their own version of {{addIfService(Object 
 object)}} that adds the service to CompositeService if the object is a 
 service. 
 It makes more sense to move this helper to CompositeService itself.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9909) org.apache.hadoop.fs.Stat should permit other LANG

2013-08-29 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita updated HADOOP-9909:
---

Attachment: HADOOP-9909.patch

I attach a patch to which I added test code for the environment variable. 

 org.apache.hadoop.fs.Stat should permit other LANG
 --

 Key: HADOOP-9909
 URL: https://issues.apache.org/jira/browse/HADOOP-9909
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 3.0.0
 Environment: CentOS 6.4 / LANG=ja_JP.UTF-8
Reporter: Shinichi Yamashita
Priority: Minor
 Attachments: HADOOP-9909.patch, HADOOP-9909.patch, HADOOP-9909.patch


 I executed the hdfs dfs -put command and it displayed the following warning 
 message, although the hdfs dfs -put command itself succeeded.
 This is because Stat.parseExecResult() only recognizes the stat command's 
 message in English.
 {code}
 [hadoop@trunk ~]$ hdfs dfs -put fugafuga.txt .
 13/08/27 16:24:36 WARN util.NativeCodeLoader: Unable to load native-hadoop 
 library for your platform... using builtin-java classes where applicable
 13/08/27 16:24:37 WARN fs.FSInputChecker: Problem opening checksum file: 
 file:/home/hadoop/fugafuga.txt.  Ignoring exception:
 java.io.IOException: Unexpected stat output: stat: cannot stat 
 `/home/hadoop/.fugafuga.txt.crc': そのようなファイルやディレクトリはありません
 at org.apache.hadoop.fs.Stat.parseExecResult(Stat.java:163)
 at org.apache.hadoop.util.Shell.runCommand(Shell.java:489)
 at org.apache.hadoop.util.Shell.run(Shell.java:417)
 at org.apache.hadoop.fs.Stat.getFileStatus(Stat.java:68)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.getNativeFileLinkStatus(RawLocalFileSystem.java:806)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:738)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:523)
 at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1397)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.open(RawLocalFileSystem.java:210)
 at 
 org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.init(ChecksumFileSystem.java:143)
 at 
 org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:339)
 at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:763)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.copyFileToTarget(CommandWithDestination.java:239)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:183)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:168)
 at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:310)
 at 
 org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:282)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.processPathArgument(CommandWithDestination.java:163)
 at 
 org.apache.hadoop.fs.shell.Command.processArgument(Command.java:264)
 at 
 org.apache.hadoop.fs.shell.Command.processArguments(Command.java:248)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.processArguments(CommandWithDestination.java:140)
 at 
 org.apache.hadoop.fs.shell.CopyCommands$Put.processArguments(CopyCommands.java:224)
 at 
 org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:194)
 at org.apache.hadoop.fs.shell.Command.run(Command.java:155)
 at org.apache.hadoop.fs.FsShell.run(FsShell.java:255)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
 at org.apache.hadoop.fs.FsShell.main(FsShell.java:308)
 Caused by: java.lang.NumberFormatException: For input string: stat: cannot 
 stat `/home/hadoop/.fugafuga.txt.crc': そのようなファイルやディレクトリはありません
 at 
 java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
 at java.lang.Long.parseLong(Long.java:441)
 at java.lang.Long.parseLong(Long.java:483)
 at org.apache.hadoop.fs.Stat.parseExecResult(Stat.java:128)
 ... 27 more
 {code}
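
One way to make the parsed output locale-independent is to pin the child process locale before running stat. The standalone sketch below only illustrates that idea with ProcessBuilder and a hypothetical format string; the attached patch works through Hadoop's Shell class and may differ:
{code}
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class LocalePinnedStat {
  /** Run stat(1) with LANG/LC_ALL forced to C so its messages stay in English. */
  public static String stat(String path) throws IOException, InterruptedException {
    ProcessBuilder pb = new ProcessBuilder("stat", "-c", "%s,%F,%Y,%n", path);
    pb.environment().put("LANG", "C");
    pb.environment().put("LC_ALL", "C");
    pb.redirectErrorStream(true);
    Process proc = pb.start();
    StringBuilder out = new StringBuilder();
    BufferedReader reader =
        new BufferedReader(new InputStreamReader(proc.getInputStream()));
    try {
      String line;
      while ((line = reader.readLine()) != null) {
        out.append(line).append('\n');
      }
    } finally {
      reader.close();
    }
    proc.waitFor();
    return out.toString();
  }
}
{code}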

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9919) Rewrite hadoop-metrics2.properties

2013-08-29 Thread Akira AJISAKA (JIRA)
Akira AJISAKA created HADOOP-9919:
-

 Summary: Rewrite hadoop-metrics2.properties
 Key: HADOOP-9919
 URL: https://issues.apache.org/jira/browse/HADOOP-9919
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.1.0-beta
Reporter: Akira AJISAKA


The config for JobTracker and TaskTracker (commented out) still exists in 
hadoop-metrics2.properties as follows:

{code}
#jobtracker.sink.file_jvm.context=jvm
#jobtracker.sink.file_jvm.filename=jobtracker-jvm-metrics.out
#jobtracker.sink.file_mapred.context=mapred
#jobtracker.sink.file_mapred.filename=jobtracker-mapred-metrics.out

#tasktracker.sink.file.filename=tasktracker-metrics.out
{code}

These lines should be removed and a config for NodeManager should be added 
instead.
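
For illustration, a replacement entry could follow the same prefix.sink.instance.option pattern already used in the file; the exact keys below are an assumption, not a committed change:
{code}
#nodemanager.sink.file.class=org.apache.hadoop.metrics2.sink.FileSink
#nodemanager.sink.file.filename=nodemanager-metrics.out
{code}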

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9918) Add addIfService() to CompositeService

2013-08-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13753964#comment-13753964
 ] 

Hadoop QA commented on HADOOP-9918:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12600630/hadoop-9918-1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3034//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3034//console

This message is automatically generated.

 Add addIfService() to CompositeService
 --

 Key: HADOOP-9918
 URL: https://issues.apache.org/jira/browse/HADOOP-9918
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.1.0-beta
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
Priority: Minor
  Labels: service
 Attachments: hadoop-9918-1.patch, hadoop-9918-2.patch


 Some YARN and MR classes implement their own version of {{addIfService(Object 
 object)}} that adds the service to CompositeService if the object is a 
 service. 
 It makes more sense to move this helper to CompositeService itself.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9918) Add addIfService() to CompositeService

2013-08-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13753982#comment-13753982
 ] 

Hadoop QA commented on HADOOP-9918:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12600632/hadoop-9918-2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3035//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3035//console

This message is automatically generated.

 Add addIfService() to CompositeService
 --

 Key: HADOOP-9918
 URL: https://issues.apache.org/jira/browse/HADOOP-9918
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.1.0-beta
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
Priority: Minor
  Labels: service
 Attachments: hadoop-9918-1.patch, hadoop-9918-2.patch


 Some YARN and MR classes implement their own version of {{addIfService(Object 
 object)}} that adds the service to CompositeService if the object is a 
 service. 
 It makes more sense to move this helper to CompositeService itself.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9912) globStatus of a symlink to a directory does not report symlink as a directory

2013-08-29 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13754053#comment-13754053
 ] 

Jason Lowe commented on HADOOP-9912:


bq. This issue is not related to .snapshot support, this issue is caused by add 
symlink support to HDFS and LocalFileSystem but not handle consistency well.

If this has nothing to do with snapshot support, then why did the behavior of 
globStatus and symlinks change with HADOOP-9877 which appears to be a 
snapshot-related JIRA?

listStatus needs to follow symlinks, even in the HDFS case, otherwise symlinks 
are not very useful.  If symlinks never auto-resolve, then every client will 
have to be symlink-aware and manually resolve the link for the symlink feature 
to be useful in practice.

 globStatus of a symlink to a directory does not report symlink as a directory
 -

 Key: HADOOP-9912
 URL: https://issues.apache.org/jira/browse/HADOOP-9912
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.3.0
Reporter: Jason Lowe
Priority: Blocker
 Attachments: HADOOP-9912-testcase.patch


 globStatus for a path that is a symlink to a directory used to report the 
 resulting FileStatus as a directory but recently this has changed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9918) Add addIfService() to CompositeService

2013-08-29 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13754072#comment-13754072
 ] 

Steve Loughran commented on HADOOP-9918:


seems good, though some javadocs on the method would be nice

 Add addIfService() to CompositeService
 --

 Key: HADOOP-9918
 URL: https://issues.apache.org/jira/browse/HADOOP-9918
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.1.0-beta
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
Priority: Minor
  Labels: service
 Attachments: hadoop-9918-1.patch, hadoop-9918-2.patch


 Some YARN and MR classes implement their own version of {{addIfService(Object 
 object)}} that adds the service to CompositeService if the object is a 
 service. 
 It makes more sense to move this helper to CompositeService itself.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9912) globStatus of a symlink to a directory does not report symlink as a directory

2013-08-29 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13754115#comment-13754115
 ] 

Andrew Wang commented on HADOOP-9912:
-

Let's be constructive and figure out the right fix. Jason, thanks for the 
attached test case, that helped me understand the issue.

bq. listStatus resolves symlinks. globStatus is supposed to be equivalent to 
listStatus with wildcard support...Symlinks should be transparent to users 
unless they specifically want to know if a path is a symlink.

In HDFS, {{listStatus}} only transparently resolves symlinks in the input path. 
It doesn't resolve the results of the listing, and this is the correct 
behavior. {{globStatus}} behaves the same way, in that it returns FileStatuses 
for Paths that match the glob, and it doesn't resolve these results. You can 
(and should) see symlinks returned by listStatus and globStatus in HDFS.

I also wouldn't say {{globStatus}} is equivalent to {{listStatus}}, since it 
doesn't list directories. If you want listStatus with matching, you can use 
{{listStatus(Path, PathFilter)}}.

In RLFS there is automatic symlink resolution, so {{listStatus}} results are 
resolved, and it seems like Pig depends on this behavior. Because of 
HADOOP-9877, {{globStatus}} went from always calling {{listStatus}} to calling 
{{getFileLinkStatus}} for non-wildcard glob components. Thus, when passed a 
{{Path}} that's a symlink, {{globStatus}} says it's a symlink.

bq. Why does .snapshot support require a getFileLinkStatus? Does getFileStatus 
not work for a .snapshot directory?

It does work, but it's incorrect. globStatus is not supposed to return resolved 
statuses. It's unfortunate that RLFS has been auto-resolving all this time, but 
since apps apparently depend on it, all we can do is embrace it.

How about this: we add a fixup step that, for symlink results on a 
LocalFileSystem, resolves them (but still keeping the link path). This means no 
more symlinks in RLFS {{globStatus}} results. It's a bit obnoxious to do 
(globStatus could symlink through HDFS to a link on a local filesystem), but it 
seems like a reasonable solution.
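
A rough sketch of that fixup, assuming (as described above) that getFileStatus() on the local filesystem follows the link while keeping the queried path; this is only an illustration of the idea, not a worked-out patch:
{code}
import java.io.IOException;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;

class GlobFixup {
  // For each glob result, swap a symlink status for the status of its target,
  // so directories reached through links are reported as directories again.
  static FileStatus[] fixup(FileSystem fs, FileStatus[] results) throws IOException {
    for (int i = 0; i < results.length; i++) {
      if (results[i].isSymlink()) {
        results[i] = fs.getFileStatus(results[i].getPath());
      }
    }
    return results;
  }
}
{code}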

 globStatus of a symlink to a directory does not report symlink as a directory
 -

 Key: HADOOP-9912
 URL: https://issues.apache.org/jira/browse/HADOOP-9912
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.3.0
Reporter: Jason Lowe
Priority: Blocker
 Attachments: HADOOP-9912-testcase.patch


 globStatus for a path that is a symlink to a directory used to report the 
 resulting FileStatus as a directory but recently this has changed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9909) org.apache.hadoop.fs.Stat should permit other LANG

2013-08-29 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13754125#comment-13754125
 ] 

Akira AJISAKA commented on HADOOP-9909:
---

I reproduced this issue by setting LANG=ja_JP.UTF-8, and I also found that it 
causes MapReduce jobs to fail.
After I set LANG=C and restarted the Hadoop daemons, the sample pi job succeeded.

 org.apache.hadoop.fs.Stat should permit other LANG
 --

 Key: HADOOP-9909
 URL: https://issues.apache.org/jira/browse/HADOOP-9909
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 3.0.0
 Environment: CentOS 6.4 / LANG=ja_JP.UTF-8
Reporter: Shinichi Yamashita
Priority: Minor
 Attachments: HADOOP-9909.patch, HADOOP-9909.patch, HADOOP-9909.patch


 I executed the hdfs dfs -put command and the following warning message was
 displayed, although the hdfs dfs -put command itself succeeded.
 This is because Stat.parseExecResult() only checks for English messages from
 the stat command.
 {code}
 [hadoop@trunk ~]$ hdfs dfs -put fugafuga.txt .
 13/08/27 16:24:36 WARN util.NativeCodeLoader: Unable to load native-hadoop 
 library for your platform... using builtin-java classes where applicable
 13/08/27 16:24:37 WARN fs.FSInputChecker: Problem opening checksum file: 
 file:/home/hadoop/fugafuga.txt.  Ignoring exception:
 java.io.IOException: Unexpected stat output: stat: cannot stat 
 `/home/hadoop/.fugafuga.txt.crc': そのようなファイルやディレクトリはありません
 at org.apache.hadoop.fs.Stat.parseExecResult(Stat.java:163)
 at org.apache.hadoop.util.Shell.runCommand(Shell.java:489)
 at org.apache.hadoop.util.Shell.run(Shell.java:417)
 at org.apache.hadoop.fs.Stat.getFileStatus(Stat.java:68)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.getNativeFileLinkStatus(RawLocalFileSystem.java:806)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:738)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:523)
 at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1397)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.open(RawLocalFileSystem.java:210)
 at 
 org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.init(ChecksumFileSystem.java:143)
 at 
 org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:339)
 at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:763)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.copyFileToTarget(CommandWithDestination.java:239)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:183)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:168)
 at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:310)
 at 
 org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:282)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.processPathArgument(CommandWithDestination.java:163)
 at 
 org.apache.hadoop.fs.shell.Command.processArgument(Command.java:264)
 at 
 org.apache.hadoop.fs.shell.Command.processArguments(Command.java:248)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.processArguments(CommandWithDestination.java:140)
 at 
 org.apache.hadoop.fs.shell.CopyCommands$Put.processArguments(CopyCommands.java:224)
 at 
 org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:194)
 at org.apache.hadoop.fs.shell.Command.run(Command.java:155)
 at org.apache.hadoop.fs.FsShell.run(FsShell.java:255)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
 at org.apache.hadoop.fs.FsShell.main(FsShell.java:308)
 Caused by: java.lang.NumberFormatException: For input string: stat: cannot 
 stat `/home/hadoop/.fugafuga.txt.crc': そのようなファイルやディレクトリはありません
 at 
 java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
 at java.lang.Long.parseLong(Long.java:441)
 at java.lang.Long.parseLong(Long.java:483)
 at org.apache.hadoop.fs.Stat.parseExecResult(Stat.java:128)
 ... 27 more
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9909) org.apache.hadoop.fs.Stat should permit other LANG

2013-08-29 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-9909:
--

Attachment: console.log

Console log of the sample pi job (LANG=ja_JP.UTF-8)

 org.apache.hadoop.fs.Stat should permit other LANG
 --

 Key: HADOOP-9909
 URL: https://issues.apache.org/jira/browse/HADOOP-9909
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 3.0.0
 Environment: CentOS 6.4 / LANG=ja_JP.UTF-8
Reporter: Shinichi Yamashita
Priority: Minor
 Attachments: console.log, HADOOP-9909.patch, HADOOP-9909.patch, 
 HADOOP-9909.patch


 I executed the hdfs dfs -put command and the following warning message was
 displayed, although the hdfs dfs -put command itself succeeded.
 This is because Stat.parseExecResult() only checks for English messages from
 the stat command.
 {code}
 [hadoop@trunk ~]$ hdfs dfs -put fugafuga.txt .
 13/08/27 16:24:36 WARN util.NativeCodeLoader: Unable to load native-hadoop 
 library for your platform... using builtin-java classes where applicable
 13/08/27 16:24:37 WARN fs.FSInputChecker: Problem opening checksum file: 
 file:/home/hadoop/fugafuga.txt.  Ignoring exception:
 java.io.IOException: Unexpected stat output: stat: cannot stat 
 `/home/hadoop/.fugafuga.txt.crc': そのようなファイルやディレクトリはありません
 at org.apache.hadoop.fs.Stat.parseExecResult(Stat.java:163)
 at org.apache.hadoop.util.Shell.runCommand(Shell.java:489)
 at org.apache.hadoop.util.Shell.run(Shell.java:417)
 at org.apache.hadoop.fs.Stat.getFileStatus(Stat.java:68)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.getNativeFileLinkStatus(RawLocalFileSystem.java:806)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:738)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:523)
 at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1397)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.open(RawLocalFileSystem.java:210)
 at 
 org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.init(ChecksumFileSystem.java:143)
 at 
 org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:339)
 at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:763)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.copyFileToTarget(CommandWithDestination.java:239)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:183)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:168)
 at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:310)
 at 
 org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:282)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.processPathArgument(CommandWithDestination.java:163)
 at 
 org.apache.hadoop.fs.shell.Command.processArgument(Command.java:264)
 at 
 org.apache.hadoop.fs.shell.Command.processArguments(Command.java:248)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.processArguments(CommandWithDestination.java:140)
 at 
 org.apache.hadoop.fs.shell.CopyCommands$Put.processArguments(CopyCommands.java:224)
 at 
 org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:194)
 at org.apache.hadoop.fs.shell.Command.run(Command.java:155)
 at org.apache.hadoop.fs.FsShell.run(FsShell.java:255)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
 at org.apache.hadoop.fs.FsShell.main(FsShell.java:308)
 Caused by: java.lang.NumberFormatException: For input string: stat: cannot 
 stat `/home/hadoop/.fugafuga.txt.crc': そのようなファイルやディレクトリはありません
 at 
 java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
 at java.lang.Long.parseLong(Long.java:441)
 at java.lang.Long.parseLong(Long.java:483)
 at org.apache.hadoop.fs.Stat.parseExecResult(Stat.java:128)
 ... 27 more
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9909) org.apache.hadoop.fs.Stat should permit other LANG

2013-08-29 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-9909:
--

Priority: Major  (was: Minor)

 org.apache.hadoop.fs.Stat should permit other LANG
 --

 Key: HADOOP-9909
 URL: https://issues.apache.org/jira/browse/HADOOP-9909
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 3.0.0
 Environment: CentOS 6.4 / LANG=ja_JP.UTF-8
Reporter: Shinichi Yamashita
 Attachments: console.log, HADOOP-9909.patch, HADOOP-9909.patch, 
 HADOOP-9909.patch


 I executed the hdfs dfs -put command and the following warning message was
 displayed, although the hdfs dfs -put command itself succeeded.
 This is because Stat.parseExecResult() only checks for English messages from
 the stat command.
 {code}
 [hadoop@trunk ~]$ hdfs dfs -put fugafuga.txt .
 13/08/27 16:24:36 WARN util.NativeCodeLoader: Unable to load native-hadoop 
 library for your platform... using builtin-java classes where applicable
 13/08/27 16:24:37 WARN fs.FSInputChecker: Problem opening checksum file: 
 file:/home/hadoop/fugafuga.txt.  Ignoring exception:
 java.io.IOException: Unexpected stat output: stat: cannot stat 
 `/home/hadoop/.fugafuga.txt.crc': そのようなファイルやディレクトリはありません
 at org.apache.hadoop.fs.Stat.parseExecResult(Stat.java:163)
 at org.apache.hadoop.util.Shell.runCommand(Shell.java:489)
 at org.apache.hadoop.util.Shell.run(Shell.java:417)
 at org.apache.hadoop.fs.Stat.getFileStatus(Stat.java:68)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.getNativeFileLinkStatus(RawLocalFileSystem.java:806)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:738)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:523)
 at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1397)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.open(RawLocalFileSystem.java:210)
 at 
 org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.init(ChecksumFileSystem.java:143)
 at 
 org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:339)
 at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:763)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.copyFileToTarget(CommandWithDestination.java:239)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:183)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:168)
 at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:310)
 at 
 org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:282)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.processPathArgument(CommandWithDestination.java:163)
 at 
 org.apache.hadoop.fs.shell.Command.processArgument(Command.java:264)
 at 
 org.apache.hadoop.fs.shell.Command.processArguments(Command.java:248)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.processArguments(CommandWithDestination.java:140)
 at 
 org.apache.hadoop.fs.shell.CopyCommands$Put.processArguments(CopyCommands.java:224)
 at 
 org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:194)
 at org.apache.hadoop.fs.shell.Command.run(Command.java:155)
 at org.apache.hadoop.fs.FsShell.run(FsShell.java:255)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
 at org.apache.hadoop.fs.FsShell.main(FsShell.java:308)
 Caused by: java.lang.NumberFormatException: For input string: stat: cannot 
 stat `/home/hadoop/.fugafuga.txt.crc': そのようなファイルやディレクトリはありません
 at 
 java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
 at java.lang.Long.parseLong(Long.java:441)
 at java.lang.Long.parseLong(Long.java:483)
 at org.apache.hadoop.fs.Stat.parseExecResult(Stat.java:128)
 ... 27 more
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9912) globStatus of a symlink to a directory does not report symlink as a directory

2013-08-29 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13754132#comment-13754132
 ] 

Daryn Sharp commented on HADOOP-9912:
-

I need to look at this further but if {{DFS.listStatus}} isn't resolving, then 
we've got to think hard about the semantics of symlinks.  99% of the time, the 
user expects a symlink to be transparent.

Lots of code uses {{listStatus}} or {{globStatus}} and expects to perform 
file/dir checks.  Now that code will be required to check if the path is a 
symlink, if yes, re-stat.  This will greatly inhibit the use of symlinks which 
is why I think a new api is required.  

Either way we go, we can't have the inconsistency I cited for how globbing is 
now returning different results based on whether the symlink was matched by a 
static or globbed path component.  It must always be a resolved status or an 
unresolved status.
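
To make the extra burden on callers concrete, a symlink-aware tree walk would have to look roughly like this (a sketch assuming listStatus returns unresolved link statuses, as discussed above):
{code}
import java.io.IOException;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

class SymlinkAwareWalker {
  // Every symlink has to be re-stat'ed to find out whether it actually
  // points at a directory that should be traversed.
  static void walk(FileSystem fs, Path dir) throws IOException {
    for (FileStatus st : fs.listStatus(dir)) {
      FileStatus resolved = st.isSymlink() ? fs.getFileStatus(st.getPath()) : st;
      if (resolved.isDirectory()) {
        walk(fs, resolved.getPath());
      } else {
        System.out.println(resolved.getPath());
      }
    }
  }
}
{code}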

 globStatus of a symlink to a directory does not report symlink as a directory
 -

 Key: HADOOP-9912
 URL: https://issues.apache.org/jira/browse/HADOOP-9912
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.3.0
Reporter: Jason Lowe
Priority: Blocker
 Attachments: HADOOP-9912-testcase.patch


 globStatus for a path that is a symlink to a directory used to report the 
 resulting FileStatus as a directory but recently this has changed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9909) org.apache.hadoop.fs.Stat should permit other LANG

2013-08-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13754134#comment-13754134
 ] 

Hadoop QA commented on HADOOP-9909:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12600661/console.log
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3037//console

This message is automatically generated.

 org.apache.hadoop.fs.Stat should permit other LANG
 --

 Key: HADOOP-9909
 URL: https://issues.apache.org/jira/browse/HADOOP-9909
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 3.0.0
 Environment: CentOS 6.4 / LANG=ja_JP.UTF-8
Reporter: Shinichi Yamashita
 Attachments: console.log, HADOOP-9909.patch, HADOOP-9909.patch, 
 HADOOP-9909.patch


 I executed the hdfs dfs -put command and the following warning message was
 displayed, although the hdfs dfs -put command itself succeeded.
 This is because Stat.parseExecResult() only checks for English messages from
 the stat command.
 {code}
 [hadoop@trunk ~]$ hdfs dfs -put fugafuga.txt .
 13/08/27 16:24:36 WARN util.NativeCodeLoader: Unable to load native-hadoop 
 library for your platform... using builtin-java classes where applicable
 13/08/27 16:24:37 WARN fs.FSInputChecker: Problem opening checksum file: 
 file:/home/hadoop/fugafuga.txt.  Ignoring exception:
 java.io.IOException: Unexpected stat output: stat: cannot stat 
 `/home/hadoop/.fugafuga.txt.crc': そのようなファイルやディレクトリはありません
 at org.apache.hadoop.fs.Stat.parseExecResult(Stat.java:163)
 at org.apache.hadoop.util.Shell.runCommand(Shell.java:489)
 at org.apache.hadoop.util.Shell.run(Shell.java:417)
 at org.apache.hadoop.fs.Stat.getFileStatus(Stat.java:68)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.getNativeFileLinkStatus(RawLocalFileSystem.java:806)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:738)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:523)
 at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1397)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.open(RawLocalFileSystem.java:210)
 at 
 org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.init(ChecksumFileSystem.java:143)
 at 
 org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:339)
 at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:763)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.copyFileToTarget(CommandWithDestination.java:239)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:183)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:168)
 at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:310)
 at 
 org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:282)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.processPathArgument(CommandWithDestination.java:163)
 at 
 org.apache.hadoop.fs.shell.Command.processArgument(Command.java:264)
 at 
 org.apache.hadoop.fs.shell.Command.processArguments(Command.java:248)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.processArguments(CommandWithDestination.java:140)
 at 
 org.apache.hadoop.fs.shell.CopyCommands$Put.processArguments(CopyCommands.java:224)
 at 
 org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:194)
 at org.apache.hadoop.fs.shell.Command.run(Command.java:155)
 at org.apache.hadoop.fs.FsShell.run(FsShell.java:255)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
 at org.apache.hadoop.fs.FsShell.main(FsShell.java:308)
 Caused by: java.lang.NumberFormatException: For input string: stat: cannot 
 stat `/home/hadoop/.fugafuga.txt.crc': そのようなファイルやディレクトリはありません
 at 
 java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
 at java.lang.Long.parseLong(Long.java:441)
 at java.lang.Long.parseLong(Long.java:483)
 at org.apache.hadoop.fs.Stat.parseExecResult(Stat.java:128)
 ... 27 more
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9774) RawLocalFileSystem.listStatus() return absolute paths when input path is relative on Windows

2013-08-29 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13754137#comment-13754137
 ] 

Ivan Mitic commented on HADOOP-9774:


Thanks Shanyu for the confirmation! Will commit the patch shortly. 

 RawLocalFileSystem.listStatus() return absolute paths when input path is 
 relative on Windows
 

 Key: HADOOP-9774
 URL: https://issues.apache.org/jira/browse/HADOOP-9774
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: shanyu zhao
Assignee: shanyu zhao
 Attachments: HADOOP-9774-2.patch, HADOOP-9774-3.patch, 
 HADOOP-9774-4.patch, HADOOP-9774-5.patch, HADOOP-9774.patch


 On Windows, when using RawLocalFileSystem.listStatus() to enumerate a 
 relative path (without drive spec), e.g., file:///mydata, the resulting 
 paths become absolute paths, e.g., [file://E:/mydata/t1.txt, 
 file://E:/mydata/t2.txt...].
 Note that if we use it to enumerate an absolute path, e.g.,
 file://E:/mydata, then we get the same results as above.
 This breaks some Hive unit tests, which use the local file system to simulate
 HDFS during testing, which is why the drive spec is removed. After
 listStatus() the path becomes absolute, and Hive fails to find the
 path in its map reduce job.
 You'll see the following exception:
 [junit] java.io.IOException: cannot find dir = 
 pfile:/E:/GitHub/hive-monarch/build/ql/test/data/warehouse/src/kv1.txt in 
 pathToPartitionInfo: 
 [pfile:/GitHub/hive-monarch/build/ql/test/data/warehouse/src]
 [junit]   at 
 org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getPartitionDescFromPathRecursively(HiveFileFormatUtils.java:298)
 This problem is introduced by this JIRA:
 HADOOP-8962
 Prior to the fix for HADOOP-8962 (merged in 0.23.5), the resulting paths are 
 relative paths if the parent paths are relative, e.g., 
 [file:///mydata/t1.txt, file:///mydata/t2.txt...]
 This behavior change is a side effect of the fix in HADOOP-8962, not an
 intended change. The resulting behavior, even though it is legitimate from a
 functional point of view, breaks consistency from the caller's point of view.
 When the caller uses a relative path (without drive spec) for listStatus(),
 the resulting paths should be relative. Therefore, I think this should be
 fixed.
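
For reference, the caller-side expectation described above boils down to something like this (paths are illustrative):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class RelativeListStatus {
  public static void main(String[] args) throws Exception {
    FileSystem lfs = FileSystem.getLocal(new Configuration());
    // Enumerate a drive-less (relative) local path on Windows.
    for (FileStatus st : lfs.listStatus(new Path("file:///mydata"))) {
      // Reported issue: results come back as file://E:/mydata/t1.txt ...
      // Expected by the caller: file:///mydata/t1.txt ...
      System.out.println(st.getPath());
    }
  }
}
{code}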

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9912) globStatus of a symlink to a directory does not report symlink as a directory

2013-08-29 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13754139#comment-13754139
 ] 

Jason Lowe commented on HADOOP-9912:


bq. In HDFS, listStatus only transparently resolves symlinks in the input path. 
It doesn't resolve the results of the listing, and this is the correct behavior.

Isn't that going to break clients who are not symlink-aware?  That means we 
can't have a tree of files with a symlink to a directory in it.  A 
symlink-unaware tree walker client will not realize that the symlink is 
actually pointing to a directory and should be traversed since the file status 
will say it's not a directory. That's what's happening with Pig now.  Aren't 
there separate calls if one wants to know the true details of a link rather 
than what the link references?

 globStatus of a symlink to a directory does not report symlink as a directory
 -

 Key: HADOOP-9912
 URL: https://issues.apache.org/jira/browse/HADOOP-9912
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.3.0
Reporter: Jason Lowe
Priority: Blocker
 Attachments: HADOOP-9912-testcase.patch


 globStatus for a path that is a symlink to a directory used to report the 
 resulting FileStatus as a directory but recently this has changed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9909) org.apache.hadoop.fs.Stat should permit other LANG

2013-08-29 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita updated HADOOP-9909:
---

Attachment: HADOOP-9909.patch

 org.apache.hadoop.fs.Stat should permit other LANG
 --

 Key: HADOOP-9909
 URL: https://issues.apache.org/jira/browse/HADOOP-9909
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 3.0.0
 Environment: CentOS 6.4 / LANG=ja_JP.UTF-8
Reporter: Shinichi Yamashita
 Attachments: console.log, HADOOP-9909.patch, HADOOP-9909.patch, 
 HADOOP-9909.patch, HADOOP-9909.patch


 I executed the hdfs dfs -put command and the following warning message was
 displayed, although the hdfs dfs -put command itself succeeded.
 This is because Stat.parseExecResult() only checks for English messages from
 the stat command.
 {code}
 [hadoop@trunk ~]$ hdfs dfs -put fugafuga.txt .
 13/08/27 16:24:36 WARN util.NativeCodeLoader: Unable to load native-hadoop 
 library for your platform... using builtin-java classes where applicable
 13/08/27 16:24:37 WARN fs.FSInputChecker: Problem opening checksum file: 
 file:/home/hadoop/fugafuga.txt.  Ignoring exception:
 java.io.IOException: Unexpected stat output: stat: cannot stat 
 `/home/hadoop/.fugafuga.txt.crc': そのようなファイルやディレクトリはありません
 at org.apache.hadoop.fs.Stat.parseExecResult(Stat.java:163)
 at org.apache.hadoop.util.Shell.runCommand(Shell.java:489)
 at org.apache.hadoop.util.Shell.run(Shell.java:417)
 at org.apache.hadoop.fs.Stat.getFileStatus(Stat.java:68)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.getNativeFileLinkStatus(RawLocalFileSystem.java:806)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:738)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:523)
 at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1397)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.open(RawLocalFileSystem.java:210)
 at 
 org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.init(ChecksumFileSystem.java:143)
 at 
 org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:339)
 at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:763)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.copyFileToTarget(CommandWithDestination.java:239)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:183)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:168)
 at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:310)
 at 
 org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:282)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.processPathArgument(CommandWithDestination.java:163)
 at 
 org.apache.hadoop.fs.shell.Command.processArgument(Command.java:264)
 at 
 org.apache.hadoop.fs.shell.Command.processArguments(Command.java:248)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.processArguments(CommandWithDestination.java:140)
 at 
 org.apache.hadoop.fs.shell.CopyCommands$Put.processArguments(CopyCommands.java:224)
 at 
 org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:194)
 at org.apache.hadoop.fs.shell.Command.run(Command.java:155)
 at org.apache.hadoop.fs.FsShell.run(FsShell.java:255)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
 at org.apache.hadoop.fs.FsShell.main(FsShell.java:308)
 Caused by: java.lang.NumberFormatException: For input string: stat: cannot 
 stat `/home/hadoop/.fugafuga.txt.crc': そのようなファイルやディレクトリはありません
 at 
 java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
 at java.lang.Long.parseLong(Long.java:441)
 at java.lang.Long.parseLong(Long.java:483)
 at org.apache.hadoop.fs.Stat.parseExecResult(Stat.java:128)
 ... 27 more
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9909) org.apache.hadoop.fs.Stat should permit other LANG

2013-08-29 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13754143#comment-13754143
 ] 

Andrew Wang commented on HADOOP-9909:
-

Looks pretty good, just a few comments:

{code}
+  /** get the environment variable
+   * @return the environment variable of the command
+   */
{code}

Our Javadoc style looks more like this, with another newline and 
capitalization. It's also okay to skip the {{@return}} if it's not adding any 
information.

{code}
+  /**
+   * Get the environment variable
+   */
{code}

I'd also rather {{Map getEnvironment()}} be {{String getEnvironment(String)}}, 
so we're not exposing the entire map (which can be modified).
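
i.e. something along these lines, where {{environment}} is a hypothetical name for the field holding the subprocess environment map:
{code}
/**
 * Get the environment variable set for the subprocess, or null if it is not set.
 */
public String getEnvironment(String variable) {
  // 'environment' is an assumed field name for the env map kept by the command
  return (environment == null) ? null : environment.get(variable);
}
{code}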

 org.apache.hadoop.fs.Stat should permit other LANG
 --

 Key: HADOOP-9909
 URL: https://issues.apache.org/jira/browse/HADOOP-9909
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 3.0.0
 Environment: CentOS 6.4 / LANG=ja_JP.UTF-8
Reporter: Shinichi Yamashita
 Attachments: console.log, HADOOP-9909.patch, HADOOP-9909.patch, 
 HADOOP-9909.patch, HADOOP-9909.patch


 I executed the hdfs dfs -put command and the following warning message was
 displayed, although the hdfs dfs -put command itself succeeded.
 This is because Stat.parseExecResult() only checks for English messages from
 the stat command.
 {code}
 [hadoop@trunk ~]$ hdfs dfs -put fugafuga.txt .
 13/08/27 16:24:36 WARN util.NativeCodeLoader: Unable to load native-hadoop 
 library for your platform... using builtin-java classes where applicable
 13/08/27 16:24:37 WARN fs.FSInputChecker: Problem opening checksum file: 
 file:/home/hadoop/fugafuga.txt.  Ignoring exception:
 java.io.IOException: Unexpected stat output: stat: cannot stat 
 `/home/hadoop/.fugafuga.txt.crc': そのようなファイルやディレクトリはありません
 at org.apache.hadoop.fs.Stat.parseExecResult(Stat.java:163)
 at org.apache.hadoop.util.Shell.runCommand(Shell.java:489)
 at org.apache.hadoop.util.Shell.run(Shell.java:417)
 at org.apache.hadoop.fs.Stat.getFileStatus(Stat.java:68)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.getNativeFileLinkStatus(RawLocalFileSystem.java:806)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:738)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:523)
 at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1397)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.open(RawLocalFileSystem.java:210)
 at 
 org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.init(ChecksumFileSystem.java:143)
 at 
 org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:339)
 at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:763)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.copyFileToTarget(CommandWithDestination.java:239)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:183)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:168)
 at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:310)
 at 
 org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:282)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.processPathArgument(CommandWithDestination.java:163)
 at 
 org.apache.hadoop.fs.shell.Command.processArgument(Command.java:264)
 at 
 org.apache.hadoop.fs.shell.Command.processArguments(Command.java:248)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.processArguments(CommandWithDestination.java:140)
 at 
 org.apache.hadoop.fs.shell.CopyCommands$Put.processArguments(CopyCommands.java:224)
 at 
 org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:194)
 at org.apache.hadoop.fs.shell.Command.run(Command.java:155)
 at org.apache.hadoop.fs.FsShell.run(FsShell.java:255)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
 at org.apache.hadoop.fs.FsShell.main(FsShell.java:308)
 Caused by: java.lang.NumberFormatException: For input string: stat: cannot 
 stat `/home/hadoop/.fugafuga.txt.crc': そのようなファイルやディレクトリはありません
 at 
 java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
 at java.lang.Long.parseLong(Long.java:441)
 at java.lang.Long.parseLong(Long.java:483)
 at org.apache.hadoop.fs.Stat.parseExecResult(Stat.java:128)
 ... 27 more
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more 

[jira] [Updated] (HADOOP-9889) Refresh the Krb5 configuration when creating a new kdc in Hadoop-MiniKDC

2013-08-29 Thread Sandy Ryza (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandy Ryza updated HADOOP-9889:
---

   Resolution: Fixed
Fix Version/s: 2.3.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I just committed this.  Thanks Wei!

 Refresh the Krb5 configuration when creating a new kdc in Hadoop-MiniKDC
 

 Key: HADOOP-9889
 URL: https://issues.apache.org/jira/browse/HADOOP-9889
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Wei Yan
Assignee: Wei Yan
 Fix For: 2.3.0

 Attachments: HADOOP-9889.patch, HAOOP-9889.patch, HAOOP-9889.patch


 Krb5 Config uses a singleton and once initialized it does not refresh 
 automatically. Without refresh, there are failures if you are using MiniKDCs 
 with different configurations (such as different realms) within the same test 
 run or if the Krb5 Config singleton is called before the MiniKDC is started 
 for the first time.
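
For readers following along: forcing a reload of that singleton usually comes down to the JDK-internal sun.security.krb5.Config.refresh(); the reflective sketch below only illustrates the idea and is not necessarily what the committed patch does:
{code}
// Reload the JVM's cached krb5 configuration after java.security.krb5.conf has
// been pointed at the new MiniKDC's krb5.conf. Reflection is used because
// sun.security.krb5.Config is JDK-internal and may be absent on some JVMs.
public static void refreshKrb5Config() throws Exception {
  Class<?> config = Class.forName("sun.security.krb5.Config");
  config.getMethod("refresh").invoke(null);
}
{code}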

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9909) org.apache.hadoop.fs.Stat should permit other LANG

2013-08-29 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita updated HADOOP-9909:
---

Status: Open  (was: Patch Available)

 org.apache.hadoop.fs.Stat should permit other LANG
 --

 Key: HADOOP-9909
 URL: https://issues.apache.org/jira/browse/HADOOP-9909
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 3.0.0
 Environment: CentOS 6.4 / LANG=ja_JP.UTF-8
Reporter: Shinichi Yamashita
 Attachments: console.log, HADOOP-9909.patch, HADOOP-9909.patch, 
 HADOOP-9909.patch, HADOOP-9909.patch


 I executed the hdfs dfs -put command and the following warning message was
 displayed, although the hdfs dfs -put command itself succeeded.
 This is because Stat.parseExecResult() only checks for English messages from
 the stat command.
 {code}
 [hadoop@trunk ~]$ hdfs dfs -put fugafuga.txt .
 13/08/27 16:24:36 WARN util.NativeCodeLoader: Unable to load native-hadoop 
 library for your platform... using builtin-java classes where applicable
 13/08/27 16:24:37 WARN fs.FSInputChecker: Problem opening checksum file: 
 file:/home/hadoop/fugafuga.txt.  Ignoring exception:
 java.io.IOException: Unexpected stat output: stat: cannot stat 
 `/home/hadoop/.fugafuga.txt.crc': そのようなファイルやディレクトリはありません
 at org.apache.hadoop.fs.Stat.parseExecResult(Stat.java:163)
 at org.apache.hadoop.util.Shell.runCommand(Shell.java:489)
 at org.apache.hadoop.util.Shell.run(Shell.java:417)
 at org.apache.hadoop.fs.Stat.getFileStatus(Stat.java:68)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.getNativeFileLinkStatus(RawLocalFileSystem.java:806)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:738)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:523)
 at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1397)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.open(RawLocalFileSystem.java:210)
 at 
 org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.init(ChecksumFileSystem.java:143)
 at 
 org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:339)
 at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:763)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.copyFileToTarget(CommandWithDestination.java:239)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:183)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:168)
 at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:310)
 at 
 org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:282)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.processPathArgument(CommandWithDestination.java:163)
 at 
 org.apache.hadoop.fs.shell.Command.processArgument(Command.java:264)
 at 
 org.apache.hadoop.fs.shell.Command.processArguments(Command.java:248)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.processArguments(CommandWithDestination.java:140)
 at 
 org.apache.hadoop.fs.shell.CopyCommands$Put.processArguments(CopyCommands.java:224)
 at 
 org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:194)
 at org.apache.hadoop.fs.shell.Command.run(Command.java:155)
 at org.apache.hadoop.fs.FsShell.run(FsShell.java:255)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
 at org.apache.hadoop.fs.FsShell.main(FsShell.java:308)
 Caused by: java.lang.NumberFormatException: For input string: stat: cannot 
 stat `/home/hadoop/.fugafuga.txt.crc': そのようなファイルやディレクトリはありません
 at 
 java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
 at java.lang.Long.parseLong(Long.java:441)
 at java.lang.Long.parseLong(Long.java:483)
 at org.apache.hadoop.fs.Stat.parseExecResult(Stat.java:128)
 ... 27 more
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9889) Refresh the Krb5 configuration when creating a new kdc in Hadoop-MiniKDC

2013-08-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13754154#comment-13754154
 ] 

Hudson commented on HADOOP-9889:


SUCCESS: Integrated in Hadoop-trunk-Commit #4346 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4346/])
HADOOP-9889. Refresh the Krb5 configuration when creating a new kdc in 
Hadoop-MiniKDC (Wei Yan via Sandy Ryza) (sandy: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1518847)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/main/java/org/apache/hadoop/minikdc/MiniKdc.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/test/java/org/apache/hadoop/minikdc/TestMiniKdc.java


 Refresh the Krb5 configuration when creating a new kdc in Hadoop-MiniKDC
 

 Key: HADOOP-9889
 URL: https://issues.apache.org/jira/browse/HADOOP-9889
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Wei Yan
Assignee: Wei Yan
 Fix For: 2.3.0

 Attachments: HADOOP-9889.patch, HAOOP-9889.patch, HAOOP-9889.patch


 Krb5 Config uses a singleton and once initialized it does not refresh 
 automatically. Without refresh, there are failures if you are using MiniKDCs 
 with different configurations (such as different realms) within the same test 
 run or if the Krb5 Config singleton is called before the MiniKDC is started 
 for the first time.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9909) org.apache.hadoop.fs.Stat should permit other LANG

2013-08-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13754171#comment-13754171
 ] 

Hadoop QA commented on HADOOP-9909:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12600665/HADOOP-9909.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3038//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3038//console

This message is automatically generated.

 org.apache.hadoop.fs.Stat should permit other LANG
 --

 Key: HADOOP-9909
 URL: https://issues.apache.org/jira/browse/HADOOP-9909
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 3.0.0
 Environment: CentOS 6.4 / LANG=ja_JP.UTF-8
Reporter: Shinichi Yamashita
 Attachments: console.log, HADOOP-9909.patch, HADOOP-9909.patch, 
 HADOOP-9909.patch, HADOOP-9909.patch


 I executed the hdfs dfs -put command and the following warning message was
 displayed, although the hdfs dfs -put command itself succeeded.
 This is because Stat.parseExecResult() only checks for English messages from
 the stat command.
 {code}
 [hadoop@trunk ~]$ hdfs dfs -put fugafuga.txt .
 13/08/27 16:24:36 WARN util.NativeCodeLoader: Unable to load native-hadoop 
 library for your platform... using builtin-java classes where applicable
 13/08/27 16:24:37 WARN fs.FSInputChecker: Problem opening checksum file: 
 file:/home/hadoop/fugafuga.txt.  Ignoring exception:
 java.io.IOException: Unexpected stat output: stat: cannot stat 
 `/home/hadoop/.fugafuga.txt.crc': そのようなファイルやディレクトリはありません
 at org.apache.hadoop.fs.Stat.parseExecResult(Stat.java:163)
 at org.apache.hadoop.util.Shell.runCommand(Shell.java:489)
 at org.apache.hadoop.util.Shell.run(Shell.java:417)
 at org.apache.hadoop.fs.Stat.getFileStatus(Stat.java:68)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.getNativeFileLinkStatus(RawLocalFileSystem.java:806)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:738)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:523)
 at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1397)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.open(RawLocalFileSystem.java:210)
 at 
 org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.init(ChecksumFileSystem.java:143)
 at 
 org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:339)
 at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:763)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.copyFileToTarget(CommandWithDestination.java:239)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:183)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:168)
 at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:310)
 at 
 org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:282)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.processPathArgument(CommandWithDestination.java:163)
 at 
 org.apache.hadoop.fs.shell.Command.processArgument(Command.java:264)
 at 
 org.apache.hadoop.fs.shell.Command.processArguments(Command.java:248)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.processArguments(CommandWithDestination.java:140)
 at 
 org.apache.hadoop.fs.shell.CopyCommands$Put.processArguments(CopyCommands.java:224)
 at 
 org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:194)
 at org.apache.hadoop.fs.shell.Command.run(Command.java:155)
 at 

[jira] [Updated] (HADOOP-9919) Rewrite hadoop-metrics2.properties

2013-08-29 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-9919:
--

Attachment: HADOOP-9919.patch

 Rewrite hadoop-metrics2.properties
 --

 Key: HADOOP-9919
 URL: https://issues.apache.org/jira/browse/HADOOP-9919
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.1.0-beta
Reporter: Akira AJISAKA
 Attachments: HADOOP-9919.patch


 The config for JobTracker and TaskTracker (commented out) still exists in 
 hadoop-metrics2.properties as follows:
 {code}
 #jobtracker.sink.file_jvm.context=jvm
 #jobtracker.sink.file_jvm.filename=jobtracker-jvm-metrics.out
 #jobtracker.sink.file_mapred.context=mapred
 #jobtracker.sink.file_mapred.filename=jobtracker-mapred-metrics.out
 #tasktracker.sink.file.filename=tasktracker-metrics.out
 {code}
 These lines should be removed and a config for NodeManager should be added 
 instead.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9912) globStatus of a symlink to a directory does not report symlink as a directory

2013-08-29 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13754180#comment-13754180
 ] 

Andrew Wang commented on HADOOP-9912:
-

Daryn, Jason, thanks for the input:

bq. I need to look at this further but if DFS.listStatus isn't resolving, then 
we've got to think hard about the semantics of symlinks. 99% of the time, the 
user expects a symlink to be transparent.
bq. Aren't there separate calls if one wants to know the true details of a link 
rather than what the link references?

If you look at {{readdir}} as an example, it does not automatically dereference 
by default. Neither does {{ls}}, unless you use the {{-L}} flag on Linux. I 
think that's the expected default behavior, showing the actual contents of the 
directory. It's possible to build a directory walking program via the current 
{{listStatus}}, it just requires dereferencing any links to see if the target 
is a directory. This appears to be what {{ls -R}} does.

I think my proposal to fix RLFS still makes sense (let RLFS be inconsistent and 
compatible), and then we can think about adding a {{ls -L}} style convenience 
flag or a new call for auto-deref of listing and glob results.
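To make the dereference-then-recurse approach concrete, here is a minimal sketch that uses only the public FileSystem/FileStatus API; it is illustrative, assumes no symlink cycles, and is not code from any attached patch:

{code}
import java.io.IOException;

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SymlinkAwareWalk {
  // Recursively list dir; symlink entries are dereferenced explicitly to
  // decide whether to recurse, which is roughly what "ls -R" does.
  static void walk(FileSystem fs, Path dir) throws IOException {
    for (FileStatus entry : fs.listStatus(dir)) {
      FileStatus resolved = entry;
      if (entry.isSymlink()) {
        // getFileStatus() follows the link and describes the target.
        resolved = fs.getFileStatus(entry.getPath());
      }
      if (resolved.isDirectory()) {
        walk(fs, entry.getPath());
      } else {
        System.out.println(entry.getPath());
      }
    }
  }
}
{code}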

 globStatus of a symlink to a directory does not report symlink as a directory
 -

 Key: HADOOP-9912
 URL: https://issues.apache.org/jira/browse/HADOOP-9912
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.3.0
Reporter: Jason Lowe
Priority: Blocker
 Attachments: HADOOP-9912-testcase.patch


 globStatus for a path that is a symlink to a directory used to report the 
 resulting FileStatus as a directory but recently this has changed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9919) Rewrite hadoop-metrics2.properties

2013-08-29 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13754185#comment-13754185
 ] 

Akira AJISAKA commented on HADOOP-9919:
---

I uploaded a patch to do the following (a sketch of the resulting entries is shown below):

# remove the jobtracker, tasktracker, maptask, and reducetask settings
# add resourcemanager, nodemanager, and mrappmaster settings
# set the output path to a full path
# add class settings for splitting metrics examples
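As a rough illustration of the shape of the new entries (the prefixes follow the usual metrics2 record prefixes; the file names, paths, and period below are placeholders rather than the values in the attached patch):

{code}
# Illustrative sketch only -- the attached patch defines the actual entries.
# FileSink is the file-based sink shipped with metrics2.
*.sink.file.class=org.apache.hadoop.metrics2.sink.FileSink
# default polling period, in seconds
*.period=10

# Per-daemon file sinks, commented out by default like the old MRv1 ones.
#resourcemanager.sink.file.filename=/var/log/hadoop-yarn/resourcemanager-metrics.out
#nodemanager.sink.file.filename=/var/log/hadoop-yarn/nodemanager-metrics.out
#mrappmaster.sink.file.filename=/tmp/mrappmaster-metrics.out
{code}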

 Rewrite hadoop-metrics2.properties
 --

 Key: HADOOP-9919
 URL: https://issues.apache.org/jira/browse/HADOOP-9919
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.1.0-beta
Reporter: Akira AJISAKA
 Attachments: HADOOP-9919.patch


 The config for JobTracker and TaskTracker (commented out) still exists in 
 hadoop-metrics2.properties as follows:
 {code}
 #jobtracker.sink.file_jvm.context=jvm
 #jobtracker.sink.file_jvm.filename=jobtracker-jvm-metrics.out
 #jobtracker.sink.file_mapred.context=mapred
 #jobtracker.sink.file_mapred.filename=jobtracker-mapred-metrics.out
 #tasktracker.sink.file.filename=tasktracker-metrics.out
 {code}
 These lines should be removed and a config for NodeManager should be added 
 instead.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9919) Rewrite hadoop-metrics2.properties

2013-08-29 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-9919:
--

Assignee: Akira AJISAKA

 Rewrite hadoop-metrics2.properties
 --

 Key: HADOOP-9919
 URL: https://issues.apache.org/jira/browse/HADOOP-9919
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.1.0-beta
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
 Attachments: HADOOP-9919.patch


 The config for JobTracker and TaskTracker (commented out) still exists in 
 hadoop-metrics2.properties as follows:
 {code}
 #jobtracker.sink.file_jvm.context=jvm
 #jobtracker.sink.file_jvm.filename=jobtracker-jvm-metrics.out
 #jobtracker.sink.file_mapred.context=mapred
 #jobtracker.sink.file_mapred.filename=jobtracker-mapred-metrics.out
 #tasktracker.sink.file.filename=tasktracker-metrics.out
 {code}
 These lines should be removed and a config for NodeManager should be added 
 instead.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9919) Rewrite hadoop-metrics2.properties

2013-08-29 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-9919:
--

Target Version/s: 3.0.0, 2.3.0
Release Note: Remove MRv1 settings from hadoop-metrics2.properties, add 
YARN settings instead.
  Status: Patch Available  (was: Open)

 Rewrite hadoop-metrics2.properties
 --

 Key: HADOOP-9919
 URL: https://issues.apache.org/jira/browse/HADOOP-9919
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.1.0-beta
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
 Attachments: HADOOP-9919.patch


 The config for JobTracker and TaskTracker (commented out) still exists in 
 hadoop-metrics2.properties as follows:
 {code}
 #jobtracker.sink.file_jvm.context=jvm
 #jobtracker.sink.file_jvm.filename=jobtracker-jvm-metrics.out
 #jobtracker.sink.file_mapred.context=mapred
 #jobtracker.sink.file_mapred.filename=jobtracker-mapred-metrics.out
 #tasktracker.sink.file.filename=tasktracker-metrics.out
 {code}
 These lines should be removed and a config for NodeManager should be added 
 instead.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9909) org.apache.hadoop.fs.Stat should permit other LANG

2013-08-29 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita updated HADOOP-9909:
---

Attachment: HADOOP-9909.patch

Thank you for the comment. I attached a patch which reflects it.

 org.apache.hadoop.fs.Stat should permit other LANG
 --

 Key: HADOOP-9909
 URL: https://issues.apache.org/jira/browse/HADOOP-9909
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 3.0.0
 Environment: CentOS 6.4 / LANG=ja_JP.UTF-8
Reporter: Shinichi Yamashita
 Attachments: console.log, HADOOP-9909.patch, HADOOP-9909.patch, 
 HADOOP-9909.patch, HADOOP-9909.patch, HADOOP-9909.patch


 I executed the hdfs dfs -put command and the following warning message was 
 displayed, although the hdfs dfs -put command itself succeeded.
 This is because Stat.parseExecResult() checks the stat command output in 
 English only.
 {code}
 [hadoop@trunk ~]$ hdfs dfs -put fugafuga.txt .
 13/08/27 16:24:36 WARN util.NativeCodeLoader: Unable to load native-hadoop 
 library for your platform... using builtin-java classes where applicable
 13/08/27 16:24:37 WARN fs.FSInputChecker: Problem opening checksum file: 
 file:/home/hadoop/fugafuga.txt.  Ignoring exception:
 java.io.IOException: Unexpected stat output: stat: cannot stat 
 `/home/hadoop/.fugafuga.txt.crc': そのようなファイルやディレクトリはありません
 at org.apache.hadoop.fs.Stat.parseExecResult(Stat.java:163)
 at org.apache.hadoop.util.Shell.runCommand(Shell.java:489)
 at org.apache.hadoop.util.Shell.run(Shell.java:417)
 at org.apache.hadoop.fs.Stat.getFileStatus(Stat.java:68)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.getNativeFileLinkStatus(RawLocalFileSystem.java:806)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:738)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:523)
 at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1397)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.open(RawLocalFileSystem.java:210)
 at 
 org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.init(ChecksumFileSystem.java:143)
 at 
 org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:339)
 at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:763)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.copyFileToTarget(CommandWithDestination.java:239)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:183)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:168)
 at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:310)
 at 
 org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:282)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.processPathArgument(CommandWithDestination.java:163)
 at 
 org.apache.hadoop.fs.shell.Command.processArgument(Command.java:264)
 at 
 org.apache.hadoop.fs.shell.Command.processArguments(Command.java:248)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.processArguments(CommandWithDestination.java:140)
 at 
 org.apache.hadoop.fs.shell.CopyCommands$Put.processArguments(CopyCommands.java:224)
 at 
 org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:194)
 at org.apache.hadoop.fs.shell.Command.run(Command.java:155)
 at org.apache.hadoop.fs.FsShell.run(FsShell.java:255)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
 at org.apache.hadoop.fs.FsShell.main(FsShell.java:308)
 Caused by: java.lang.NumberFormatException: For input string: stat: cannot 
 stat `/home/hadoop/.fugafuga.txt.crc': そのようなファイルやディレクトリはありません
 at 
 java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
 at java.lang.Long.parseLong(Long.java:441)
 at java.lang.Long.parseLong(Long.java:483)
 at org.apache.hadoop.fs.Stat.parseExecResult(Stat.java:128)
 ... 27 more
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9909) org.apache.hadoop.fs.Stat should permit other LANG

2013-08-29 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita updated HADOOP-9909:
---

Status: Patch Available  (was: Open)

 org.apache.hadoop.fs.Stat should permit other LANG
 --

 Key: HADOOP-9909
 URL: https://issues.apache.org/jira/browse/HADOOP-9909
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 3.0.0
 Environment: CentOS 6.4 / LANG=ja_JP.UTF-8
Reporter: Shinichi Yamashita
 Attachments: console.log, HADOOP-9909.patch, HADOOP-9909.patch, 
 HADOOP-9909.patch, HADOOP-9909.patch, HADOOP-9909.patch


 I executed the hdfs dfs -put command and the following warning message was 
 displayed, although the hdfs dfs -put command itself succeeded.
 This is because Stat.parseExecResult() checks the stat command output in 
 English only.
 {code}
 [hadoop@trunk ~]$ hdfs dfs -put fugafuga.txt .
 13/08/27 16:24:36 WARN util.NativeCodeLoader: Unable to load native-hadoop 
 library for your platform... using builtin-java classes where applicable
 13/08/27 16:24:37 WARN fs.FSInputChecker: Problem opening checksum file: 
 file:/home/hadoop/fugafuga.txt.  Ignoring exception:
 java.io.IOException: Unexpected stat output: stat: cannot stat 
 `/home/hadoop/.fugafuga.txt.crc': そのようなファイルやディレクトリはありません
 at org.apache.hadoop.fs.Stat.parseExecResult(Stat.java:163)
 at org.apache.hadoop.util.Shell.runCommand(Shell.java:489)
 at org.apache.hadoop.util.Shell.run(Shell.java:417)
 at org.apache.hadoop.fs.Stat.getFileStatus(Stat.java:68)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.getNativeFileLinkStatus(RawLocalFileSystem.java:806)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:738)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:523)
 at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1397)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.open(RawLocalFileSystem.java:210)
 at 
 org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.init(ChecksumFileSystem.java:143)
 at 
 org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:339)
 at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:763)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.copyFileToTarget(CommandWithDestination.java:239)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:183)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:168)
 at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:310)
 at 
 org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:282)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.processPathArgument(CommandWithDestination.java:163)
 at 
 org.apache.hadoop.fs.shell.Command.processArgument(Command.java:264)
 at 
 org.apache.hadoop.fs.shell.Command.processArguments(Command.java:248)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.processArguments(CommandWithDestination.java:140)
 at 
 org.apache.hadoop.fs.shell.CopyCommands$Put.processArguments(CopyCommands.java:224)
 at 
 org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:194)
 at org.apache.hadoop.fs.shell.Command.run(Command.java:155)
 at org.apache.hadoop.fs.FsShell.run(FsShell.java:255)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
 at org.apache.hadoop.fs.FsShell.main(FsShell.java:308)
 Caused by: java.lang.NumberFormatException: For input string: stat: cannot 
 stat `/home/hadoop/.fugafuga.txt.crc': そのようなファイルやディレクトリはありません
 at 
 java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
 at java.lang.Long.parseLong(Long.java:441)
 at java.lang.Long.parseLong(Long.java:483)
 at org.apache.hadoop.fs.Stat.parseExecResult(Stat.java:128)
 ... 27 more
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9913) Document time unit to metrics

2013-08-29 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13754199#comment-13754199
 ] 

Akira AJISAKA commented on HADOOP-9913:
---

[~ozawa], thanks for your comment.

{quote}
Can you understand what "Queue time" stands for? If your answer is positive, 
it's OK. However, IMHO, "Time in server side queue" is a more appropriate 
expression.
{quote}

I understand. I'll fix the patch to use a more appropriate expression.

{quote}
The patch against hadoop-common-project and hadoop-hdfs-project can be split.
{quote}

Thanks. I'll split the patch and file another issue for NameNodeMetrics in the 
HDFS project.
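For illustration, the kind of change being discussed is simply carrying the unit in the @Metric description text, assuming here that the unit is milliseconds; the exact wording in the patch may differ:

{code}
// Illustrative wording only; assumes millisecond-valued samples.
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.MutableRate;

@Metrics(about="Example of unit-annotated RPC metrics", context="rpc")
public class ExampleRpcMetrics {
  @Metric("Queue time in milliseconds") MutableRate rpcQueueTime;
  @Metric("Processing time in milliseconds") MutableRate rpcProcessingTime;
}
{code}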

 Document time unit to metrics
 -

 Key: HADOOP-9913
 URL: https://issues.apache.org/jira/browse/HADOOP-9913
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation, metrics
Affects Versions: 3.0.0, 2.1.0-beta
 Environment: trunk
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
Priority: Minor
  Labels: newbie
 Attachments: HADOOP-9913.patch


 For example, in o.a.h.hdfs.server.namenode.metrics.NameNodeMetrics.java, 
 metrics are declared as follows:
 {code}
   @Metric("Duration in SafeMode at startup") MutableGaugeInt safeModeTime;
   @Metric("Time loading FS Image at startup") MutableGaugeInt fsImageLoadTime;
 {code}
 Since some users may be confused about which unit (sec or msec) is used, the 
 unit should be documented.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9909) org.apache.hadoop.fs.Stat should permit other LANG

2013-08-29 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13754200#comment-13754200
 ] 

Andrew Wang commented on HADOOP-9909:
-

+1 pending Jenkins. [~ajisakaa] do you mind testing with this patch to make 
sure it fixes your issue? The test case doesn't set an incorrect LANG first, so 
it doesn't give positive confirmation.
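For reference, the usual way to make this kind of parsing locale-proof is to pin the child process to the POSIX locale, so stat prints its output and error messages in English regardless of the user's LANG. A standalone sketch of that general technique, assuming GNU stat on Linux and not reflecting how the attached patch implements it:

{code}
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class LocaleIndependentStat {
  public static void main(String[] args) throws Exception {
    // GNU stat: print size and file type in a fixed, comma-separated format.
    ProcessBuilder pb = new ProcessBuilder("stat", "-c", "%s,%F", args[0]);
    // Force the POSIX locale so the output is not translated.
    pb.environment().put("LANG", "C");
    pb.environment().put("LC_ALL", "C");
    Process p = pb.start();
    try (BufferedReader r =
        new BufferedReader(new InputStreamReader(p.getInputStream()))) {
      String line;
      while ((line = r.readLine()) != null) {
        System.out.println(line);
      }
    }
    System.exit(p.waitFor());
  }
}
{code}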

 org.apache.hadoop.fs.Stat should permit other LANG
 --

 Key: HADOOP-9909
 URL: https://issues.apache.org/jira/browse/HADOOP-9909
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 3.0.0
 Environment: CentOS 6.4 / LANG=ja_JP.UTF-8
Reporter: Shinichi Yamashita
 Attachments: console.log, HADOOP-9909.patch, HADOOP-9909.patch, 
 HADOOP-9909.patch, HADOOP-9909.patch, HADOOP-9909.patch


 I executed the hdfs dfs -put command and the following warning message was 
 displayed, although the hdfs dfs -put command itself succeeded.
 This is because Stat.parseExecResult() checks the stat command output in 
 English only.
 {code}
 [hadoop@trunk ~]$ hdfs dfs -put fugafuga.txt .
 13/08/27 16:24:36 WARN util.NativeCodeLoader: Unable to load native-hadoop 
 library for your platform... using builtin-java classes where applicable
 13/08/27 16:24:37 WARN fs.FSInputChecker: Problem opening checksum file: 
 file:/home/hadoop/fugafuga.txt.  Ignoring exception:
 java.io.IOException: Unexpected stat output: stat: cannot stat 
 `/home/hadoop/.fugafuga.txt.crc': そのようなファイルやディレクトリはありません
 at org.apache.hadoop.fs.Stat.parseExecResult(Stat.java:163)
 at org.apache.hadoop.util.Shell.runCommand(Shell.java:489)
 at org.apache.hadoop.util.Shell.run(Shell.java:417)
 at org.apache.hadoop.fs.Stat.getFileStatus(Stat.java:68)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.getNativeFileLinkStatus(RawLocalFileSystem.java:806)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:738)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:523)
 at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1397)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.open(RawLocalFileSystem.java:210)
 at 
 org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.init(ChecksumFileSystem.java:143)
 at 
 org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:339)
 at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:763)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.copyFileToTarget(CommandWithDestination.java:239)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:183)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:168)
 at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:310)
 at 
 org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:282)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.processPathArgument(CommandWithDestination.java:163)
 at 
 org.apache.hadoop.fs.shell.Command.processArgument(Command.java:264)
 at 
 org.apache.hadoop.fs.shell.Command.processArguments(Command.java:248)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.processArguments(CommandWithDestination.java:140)
 at 
 org.apache.hadoop.fs.shell.CopyCommands$Put.processArguments(CopyCommands.java:224)
 at 
 org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:194)
 at org.apache.hadoop.fs.shell.Command.run(Command.java:155)
 at org.apache.hadoop.fs.FsShell.run(FsShell.java:255)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
 at org.apache.hadoop.fs.FsShell.main(FsShell.java:308)
 Caused by: java.lang.NumberFormatException: For input string: stat: cannot 
 stat `/home/hadoop/.fugafuga.txt.crc': そのようなファイルやディレクトリはありません
 at 
 java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
 at java.lang.Long.parseLong(Long.java:441)
 at java.lang.Long.parseLong(Long.java:483)
 at org.apache.hadoop.fs.Stat.parseExecResult(Stat.java:128)
 ... 27 more
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9909) org.apache.hadoop.fs.Stat should permit other LANG

2013-08-29 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13754232#comment-13754232
 ] 

Akira AJISAKA commented on HADOOP-9909:
---

+1, I applied the patch and ran the sample pi job successfully with 
LANG=jp_JP.UTF-8.

 org.apache.hadoop.fs.Stat should permit other LANG
 --

 Key: HADOOP-9909
 URL: https://issues.apache.org/jira/browse/HADOOP-9909
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 3.0.0
 Environment: CentOS 6.4 / LANG=ja_JP.UTF-8
Reporter: Shinichi Yamashita
 Attachments: console.log, HADOOP-9909.patch, HADOOP-9909.patch, 
 HADOOP-9909.patch, HADOOP-9909.patch, HADOOP-9909.patch


 I executed the hdfs dfs -put command and the following warning message was 
 displayed, although the hdfs dfs -put command itself succeeded.
 This is because Stat.parseExecResult() checks the stat command output in 
 English only.
 {code}
 [hadoop@trunk ~]$ hdfs dfs -put fugafuga.txt .
 13/08/27 16:24:36 WARN util.NativeCodeLoader: Unable to load native-hadoop 
 library for your platform... using builtin-java classes where applicable
 13/08/27 16:24:37 WARN fs.FSInputChecker: Problem opening checksum file: 
 file:/home/hadoop/fugafuga.txt.  Ignoring exception:
 java.io.IOException: Unexpected stat output: stat: cannot stat 
 `/home/hadoop/.fugafuga.txt.crc': そのようなファイルやディレクトリはありません
 at org.apache.hadoop.fs.Stat.parseExecResult(Stat.java:163)
 at org.apache.hadoop.util.Shell.runCommand(Shell.java:489)
 at org.apache.hadoop.util.Shell.run(Shell.java:417)
 at org.apache.hadoop.fs.Stat.getFileStatus(Stat.java:68)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.getNativeFileLinkStatus(RawLocalFileSystem.java:806)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:738)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:523)
 at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1397)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.open(RawLocalFileSystem.java:210)
 at 
 org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.init(ChecksumFileSystem.java:143)
 at 
 org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:339)
 at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:763)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.copyFileToTarget(CommandWithDestination.java:239)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:183)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:168)
 at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:310)
 at 
 org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:282)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.processPathArgument(CommandWithDestination.java:163)
 at 
 org.apache.hadoop.fs.shell.Command.processArgument(Command.java:264)
 at 
 org.apache.hadoop.fs.shell.Command.processArguments(Command.java:248)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.processArguments(CommandWithDestination.java:140)
 at 
 org.apache.hadoop.fs.shell.CopyCommands$Put.processArguments(CopyCommands.java:224)
 at 
 org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:194)
 at org.apache.hadoop.fs.shell.Command.run(Command.java:155)
 at org.apache.hadoop.fs.FsShell.run(FsShell.java:255)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
 at org.apache.hadoop.fs.FsShell.main(FsShell.java:308)
 Caused by: java.lang.NumberFormatException: For input string: stat: cannot 
 stat `/home/hadoop/.fugafuga.txt.crc': そのようなファイルやディレクトリはありません
 at 
 java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
 at java.lang.Long.parseLong(Long.java:441)
 at java.lang.Long.parseLong(Long.java:483)
 at org.apache.hadoop.fs.Stat.parseExecResult(Stat.java:128)
 ... 27 more
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9909) org.apache.hadoop.fs.Stat should permit other LANG

2013-08-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13754240#comment-13754240
 ] 

Hadoop QA commented on HADOOP-9909:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12600685/HADOOP-9909.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3039//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3039//console

This message is automatically generated.

 org.apache.hadoop.fs.Stat should permit other LANG
 --

 Key: HADOOP-9909
 URL: https://issues.apache.org/jira/browse/HADOOP-9909
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 3.0.0
 Environment: CentOS 6.4 / LANG=ja_JP.UTF-8
Reporter: Shinichi Yamashita
 Attachments: console.log, HADOOP-9909.patch, HADOOP-9909.patch, 
 HADOOP-9909.patch, HADOOP-9909.patch, HADOOP-9909.patch


 I executed the hdfs dfs -put command and the following warning message was 
 displayed, although the hdfs dfs -put command itself succeeded.
 This is because Stat.parseExecResult() checks the stat command output in 
 English only.
 {code}
 [hadoop@trunk ~]$ hdfs dfs -put fugafuga.txt .
 13/08/27 16:24:36 WARN util.NativeCodeLoader: Unable to load native-hadoop 
 library for your platform... using builtin-java classes where applicable
 13/08/27 16:24:37 WARN fs.FSInputChecker: Problem opening checksum file: 
 file:/home/hadoop/fugafuga.txt.  Ignoring exception:
 java.io.IOException: Unexpected stat output: stat: cannot stat 
 `/home/hadoop/.fugafuga.txt.crc': そのようなファイルやディレクトリはありません
 at org.apache.hadoop.fs.Stat.parseExecResult(Stat.java:163)
 at org.apache.hadoop.util.Shell.runCommand(Shell.java:489)
 at org.apache.hadoop.util.Shell.run(Shell.java:417)
 at org.apache.hadoop.fs.Stat.getFileStatus(Stat.java:68)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.getNativeFileLinkStatus(RawLocalFileSystem.java:806)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:738)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:523)
 at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1397)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.open(RawLocalFileSystem.java:210)
 at 
 org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.init(ChecksumFileSystem.java:143)
 at 
 org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:339)
 at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:763)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.copyFileToTarget(CommandWithDestination.java:239)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:183)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:168)
 at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:310)
 at 
 org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:282)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.processPathArgument(CommandWithDestination.java:163)
 at 
 org.apache.hadoop.fs.shell.Command.processArgument(Command.java:264)
 at 
 org.apache.hadoop.fs.shell.Command.processArguments(Command.java:248)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.processArguments(CommandWithDestination.java:140)
 at 
 org.apache.hadoop.fs.shell.CopyCommands$Put.processArguments(CopyCommands.java:224)
 at 
 org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:194)
 at org.apache.hadoop.fs.shell.Command.run(Command.java:155)
 at 

[jira] [Updated] (HADOOP-9913) Document time unit to metrics

2013-08-29 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-9913:
--

Status: Open  (was: Patch Available)

 Document time unit to metrics
 -

 Key: HADOOP-9913
 URL: https://issues.apache.org/jira/browse/HADOOP-9913
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation, metrics
Affects Versions: 2.1.0-beta, 3.0.0
 Environment: trunk
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
Priority: Minor
  Labels: newbie
 Attachments: HADOOP-9913.patch


 For example, in o.a.h.hdfs.server.namenode.metrics.NameNodeMetrics.java, 
 metrics are declared as follows:
 {code}
   @Metric("Duration in SafeMode at startup") MutableGaugeInt safeModeTime;
   @Metric("Time loading FS Image at startup") MutableGaugeInt fsImageLoadTime;
 {code}
 Since some users may be confused about which unit (sec or msec) is used, the 
 unit should be documented.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9919) Rewrite hadoop-metrics2.properties

2013-08-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13754241#comment-13754241
 ] 

Hadoop QA commented on HADOOP-9919:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12600679/HADOOP-9919.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3040//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3040//console

This message is automatically generated.

 Rewrite hadoop-metrics2.properties
 --

 Key: HADOOP-9919
 URL: https://issues.apache.org/jira/browse/HADOOP-9919
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.1.0-beta
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
 Attachments: HADOOP-9919.patch


 The config for JobTracker and TaskTracker (commented out) still exists in 
 hadoop-metrics2.properties as follows:
 {code}
 #jobtracker.sink.file_jvm.context=jvm
 #jobtracker.sink.file_jvm.filename=jobtracker-jvm-metrics.out
 #jobtracker.sink.file_mapred.context=mapred
 #jobtracker.sink.file_mapred.filename=jobtracker-mapred-metrics.out
 #tasktracker.sink.file.filename=tasktracker-metrics.out
 {code}
 These lines should be removed and a config for NodeManager should be added 
 instead.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9913) Document time unit to RpcMetrics

2013-08-29 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-9913:
--

Summary: Document time unit to RpcMetrics  (was: Document time unit to 
metrics)

 Document time unit to RpcMetrics
 

 Key: HADOOP-9913
 URL: https://issues.apache.org/jira/browse/HADOOP-9913
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation, metrics
Affects Versions: 3.0.0, 2.1.0-beta
 Environment: trunk
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
Priority: Minor
  Labels: newbie
 Attachments: HADOOP-9913.2.patch, HADOOP-9913.patch


 For example, in o.a.h.hdfs.server.namenode.metrics.NameNodeMetrics.java, 
 metrics are declared as follows:
 {code}
   @Metric("Duration in SafeMode at startup") MutableGaugeInt safeModeTime;
   @Metric("Time loading FS Image at startup") MutableGaugeInt fsImageLoadTime;
 {code}
 Since some users may be confused about which unit (sec or msec) is used, the 
 unit should be documented.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9913) Document time unit to metrics

2013-08-29 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-9913:
--

Attachment: HADOOP-9913.2.patch

 Document time unit to metrics
 -

 Key: HADOOP-9913
 URL: https://issues.apache.org/jira/browse/HADOOP-9913
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation, metrics
Affects Versions: 3.0.0, 2.1.0-beta
 Environment: trunk
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
Priority: Minor
  Labels: newbie
 Attachments: HADOOP-9913.2.patch, HADOOP-9913.patch


 For example, in o.a.h.hdfs.server.namenode.metrics.NameNodeMetrics.java, 
 metrics are declared as follows:
 {code}
   @Metric("Duration in SafeMode at startup") MutableGaugeInt safeModeTime;
   @Metric("Time loading FS Image at startup") MutableGaugeInt fsImageLoadTime;
 {code}
 Since some users may be confused about which unit (sec or msec) is used, the 
 unit should be documented.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9913) Document time unit to RpcMetrics.java

2013-08-29 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-9913:
--

Summary: Document time unit to RpcMetrics.java  (was: Document time unit to 
RpcMetrics)

 Document time unit to RpcMetrics.java
 -

 Key: HADOOP-9913
 URL: https://issues.apache.org/jira/browse/HADOOP-9913
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation, metrics
Affects Versions: 3.0.0, 2.1.0-beta
 Environment: trunk
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
Priority: Minor
  Labels: newbie
 Attachments: HADOOP-9913.2.patch, HADOOP-9913.patch


 For example, in o.a.h.hdfs.server.namenode.metrics.NameNodeMetrics.java, 
 metrics are declared as follows:
 {code}
   @Metric("Duration in SafeMode at startup") MutableGaugeInt safeModeTime;
   @Metric("Time loading FS Image at startup") MutableGaugeInt fsImageLoadTime;
 {code}
 Since some users may be confused about which unit (sec or msec) is used, the 
 unit should be documented.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9913) Document time unit to RpcMetrics.java

2013-08-29 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-9913:
--

Description: 
In o.a.h.hdfs.server.namenode.metrics.NameNodeMetrics.java, metrics are 
declared as follows:

{code}
  @Metric("Duration in SafeMode at startup") MutableGaugeInt safeModeTime;
  @Metric("Time loading FS Image at startup") MutableGaugeInt fsImageLoadTime;
{code}

Since some users may be confused about which unit (sec or msec) is used, the 
unit should be documented.

  was:
For example, in o.a.h.hdfs.server.namenode.metrics.NameNodeMetrics.java, 
metrics are declared as follows:

{code}
  @Metric("Duration in SafeMode at startup") MutableGaugeInt safeModeTime;
  @Metric("Time loading FS Image at startup") MutableGaugeInt fsImageLoadTime;
{code}

Since some users may be confused about which unit (sec or msec) is used, the 
unit should be documented.


 Document time unit to RpcMetrics.java
 -

 Key: HADOOP-9913
 URL: https://issues.apache.org/jira/browse/HADOOP-9913
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation, metrics
Affects Versions: 3.0.0, 2.1.0-beta
 Environment: trunk
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
Priority: Minor
  Labels: newbie
 Attachments: HADOOP-9913.2.patch, HADOOP-9913.patch


 In o.a.h.hdfs.server.namenode.metrics.NameNodeMetrics.java, metrics are 
 declared as follows:
 {code}
   @Metric("Duration in SafeMode at startup") MutableGaugeInt safeModeTime;
   @Metric("Time loading FS Image at startup") MutableGaugeInt fsImageLoadTime;
 {code}
 Since some users may be confused about which unit (sec or msec) is used, the 
 unit should be documented.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9913) Document time unit to RpcMetrics.java

2013-08-29 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13754243#comment-13754243
 ] 

Akira AJISAKA commented on HADOOP-9913:
---

I created HDFS-5144 for NameNodeMetrics.java.

 Document time unit to RpcMetrics.java
 -

 Key: HADOOP-9913
 URL: https://issues.apache.org/jira/browse/HADOOP-9913
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation, metrics
Affects Versions: 3.0.0, 2.1.0-beta
 Environment: trunk
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
Priority: Minor
  Labels: newbie
 Attachments: HADOOP-9913.2.patch, HADOOP-9913.patch


 In o.a.h.ipc.metrics.RpcMetrics.java, metrics are declared as follows:
 {code}
 @Metric("Queue time") MutableRate rpcQueueTime;
 @Metric("Processsing time") MutableRate rpcProcessingTime;
 {code}
 Since some users may be confused about which unit (sec or msec) is used, the 
 unit should be documented.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9913) Document time unit to RpcMetrics.java

2013-08-29 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-9913:
--

Description: 
In o.a.h.ipc.metrics.RpcMetrics.java, metrics are declared as follows:

{code}
   @Metric("Queue time") MutableRate rpcQueueTime;
   @Metric("Processsing time") MutableRate rpcProcessingTime;
{code}

Since some users may be confused about which unit (sec or msec) is used, the 
unit should be documented.

  was:
In o.a.h.hdfs.server.namenode.metrics.NameNodeMetrics.java, metrics are 
declared as follows:

{code}
  @Metric("Duration in SafeMode at startup") MutableGaugeInt safeModeTime;
  @Metric("Time loading FS Image at startup") MutableGaugeInt fsImageLoadTime;
{code}

Since some users may be confused about which unit (sec or msec) is used, the 
unit should be documented.


 Document time unit to RpcMetrics.java
 -

 Key: HADOOP-9913
 URL: https://issues.apache.org/jira/browse/HADOOP-9913
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation, metrics
Affects Versions: 3.0.0, 2.1.0-beta
 Environment: trunk
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
Priority: Minor
  Labels: newbie
 Attachments: HADOOP-9913.2.patch, HADOOP-9913.patch


 In o.a.h.ipc.metrics.RpcMetrics.java, metrics are declared as follows:
 {code}
 @Metric("Queue time") MutableRate rpcQueueTime;
 @Metric("Processsing time") MutableRate rpcProcessingTime;
 {code}
 Since some users may be confused about which unit (sec or msec) is used, the 
 unit should be documented.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9913) Document time unit to RpcMetrics.java

2013-08-29 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-9913:
--

Status: Patch Available  (was: Open)

 Document time unit to RpcMetrics.java
 -

 Key: HADOOP-9913
 URL: https://issues.apache.org/jira/browse/HADOOP-9913
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation, metrics
Affects Versions: 2.1.0-beta, 3.0.0
 Environment: trunk
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
Priority: Minor
  Labels: newbie
 Attachments: HADOOP-9913.2.patch, HADOOP-9913.patch


 In o.a.h.ipc.metrics.RpcMetrics.java, metrics are declared as follows:
 {code}
 @Metric("Queue time") MutableRate rpcQueueTime;
 @Metric("Processsing time") MutableRate rpcProcessingTime;
 {code}
 Since some users may be confused about which unit (sec or msec) is used, the 
 unit should be documented.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9909) org.apache.hadoop.fs.Stat should permit other LANG

2013-08-29 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-9909:


  Resolution: Fixed
   Fix Version/s: 2.3.0
  3.0.0
Target Version/s: 3.0.0, 2.3.0  (was: 3.0.0)
  Status: Resolved  (was: Patch Available)

Thanks Shinichi for the contribution and Akira for further testing. Committed 
to trunk and branch-2.

 org.apache.hadoop.fs.Stat should permit other LANG
 --

 Key: HADOOP-9909
 URL: https://issues.apache.org/jira/browse/HADOOP-9909
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 3.0.0
 Environment: CentOS 6.4 / LANG=ja_JP.UTF-8
Reporter: Shinichi Yamashita
 Fix For: 3.0.0, 2.3.0

 Attachments: console.log, HADOOP-9909.patch, HADOOP-9909.patch, 
 HADOOP-9909.patch, HADOOP-9909.patch, HADOOP-9909.patch


 I executed the hdfs dfs -put command and the following warning message was 
 displayed, although the hdfs dfs -put command itself succeeded.
 This is because Stat.parseExecResult() checks the stat command output in 
 English only.
 {code}
 [hadoop@trunk ~]$ hdfs dfs -put fugafuga.txt .
 13/08/27 16:24:36 WARN util.NativeCodeLoader: Unable to load native-hadoop 
 library for your platform... using builtin-java classes where applicable
 13/08/27 16:24:37 WARN fs.FSInputChecker: Problem opening checksum file: 
 file:/home/hadoop/fugafuga.txt.  Ignoring exception:
 java.io.IOException: Unexpected stat output: stat: cannot stat 
 `/home/hadoop/.fugafuga.txt.crc': そのようなファイルやディレクトリはありません
 at org.apache.hadoop.fs.Stat.parseExecResult(Stat.java:163)
 at org.apache.hadoop.util.Shell.runCommand(Shell.java:489)
 at org.apache.hadoop.util.Shell.run(Shell.java:417)
 at org.apache.hadoop.fs.Stat.getFileStatus(Stat.java:68)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.getNativeFileLinkStatus(RawLocalFileSystem.java:806)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:738)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:523)
 at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1397)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.open(RawLocalFileSystem.java:210)
 at 
 org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.init(ChecksumFileSystem.java:143)
 at 
 org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:339)
 at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:763)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.copyFileToTarget(CommandWithDestination.java:239)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:183)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:168)
 at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:310)
 at 
 org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:282)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.processPathArgument(CommandWithDestination.java:163)
 at 
 org.apache.hadoop.fs.shell.Command.processArgument(Command.java:264)
 at 
 org.apache.hadoop.fs.shell.Command.processArguments(Command.java:248)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.processArguments(CommandWithDestination.java:140)
 at 
 org.apache.hadoop.fs.shell.CopyCommands$Put.processArguments(CopyCommands.java:224)
 at 
 org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:194)
 at org.apache.hadoop.fs.shell.Command.run(Command.java:155)
 at org.apache.hadoop.fs.FsShell.run(FsShell.java:255)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
 at org.apache.hadoop.fs.FsShell.main(FsShell.java:308)
 Caused by: java.lang.NumberFormatException: For input string: stat: cannot 
 stat `/home/hadoop/.fugafuga.txt.crc': そのようなファイルやディレクトリはありません
 at 
 java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
 at java.lang.Long.parseLong(Long.java:441)
 at java.lang.Long.parseLong(Long.java:483)
 at org.apache.hadoop.fs.Stat.parseExecResult(Stat.java:128)
 ... 27 more
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9909) org.apache.hadoop.fs.Stat should permit other LANG

2013-08-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13754257#comment-13754257
 ] 

Hudson commented on HADOOP-9909:


SUCCESS: Integrated in Hadoop-trunk-Commit #4349 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4349/])
HADOOP-9909. org.apache.hadoop.fs.Stat should permit other LANG. (Shinichi 
Yamashita via Andrew Wang) (wang: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1518862)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Stat.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/Shell.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestStat.java


 org.apache.hadoop.fs.Stat should permit other LANG
 --

 Key: HADOOP-9909
 URL: https://issues.apache.org/jira/browse/HADOOP-9909
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 3.0.0
 Environment: CentOS 6.4 / LANG=ja_JP.UTF-8
Reporter: Shinichi Yamashita
 Fix For: 3.0.0, 2.3.0

 Attachments: console.log, HADOOP-9909.patch, HADOOP-9909.patch, 
 HADOOP-9909.patch, HADOOP-9909.patch, HADOOP-9909.patch


 I executed the hdfs dfs -put command and the following warning message was 
 displayed, although the hdfs dfs -put command itself succeeded.
 This is because Stat.parseExecResult() checks the stat command output in 
 English only.
 {code}
 [hadoop@trunk ~]$ hdfs dfs -put fugafuga.txt .
 13/08/27 16:24:36 WARN util.NativeCodeLoader: Unable to load native-hadoop 
 library for your platform... using builtin-java classes where applicable
 13/08/27 16:24:37 WARN fs.FSInputChecker: Problem opening checksum file: 
 file:/home/hadoop/fugafuga.txt.  Ignoring exception:
 java.io.IOException: Unexpected stat output: stat: cannot stat 
 `/home/hadoop/.fugafuga.txt.crc': そのようなファイルやディレクトリはありません
 at org.apache.hadoop.fs.Stat.parseExecResult(Stat.java:163)
 at org.apache.hadoop.util.Shell.runCommand(Shell.java:489)
 at org.apache.hadoop.util.Shell.run(Shell.java:417)
 at org.apache.hadoop.fs.Stat.getFileStatus(Stat.java:68)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.getNativeFileLinkStatus(RawLocalFileSystem.java:806)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:738)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:523)
 at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1397)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.open(RawLocalFileSystem.java:210)
 at 
 org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.init(ChecksumFileSystem.java:143)
 at 
 org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:339)
 at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:763)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.copyFileToTarget(CommandWithDestination.java:239)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:183)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:168)
 at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:310)
 at 
 org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:282)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.processPathArgument(CommandWithDestination.java:163)
 at 
 org.apache.hadoop.fs.shell.Command.processArgument(Command.java:264)
 at 
 org.apache.hadoop.fs.shell.Command.processArguments(Command.java:248)
 at 
 org.apache.hadoop.fs.shell.CommandWithDestination.processArguments(CommandWithDestination.java:140)
 at 
 org.apache.hadoop.fs.shell.CopyCommands$Put.processArguments(CopyCommands.java:224)
 at 
 org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:194)
 at org.apache.hadoop.fs.shell.Command.run(Command.java:155)
 at org.apache.hadoop.fs.FsShell.run(FsShell.java:255)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
 at org.apache.hadoop.fs.FsShell.main(FsShell.java:308)
 Caused by: java.lang.NumberFormatException: For input string: stat: cannot 
 stat `/home/hadoop/.fugafuga.txt.crc': そのようなファイルやディレクトリはありません
 at 
 java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
 at java.lang.Long.parseLong(Long.java:441)
 at java.lang.Long.parseLong(Long.java:483)
 at 

[jira] [Commented] (HADOOP-9913) Document time unit to RpcMetrics.java

2013-08-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13754261#comment-13754261
 ] 

Hadoop QA commented on HADOOP-9913:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12600696/HADOOP-9913.2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation patch that doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3041//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3041//console

This message is automatically generated.

 Document time unit to RpcMetrics.java
 -

 Key: HADOOP-9913
 URL: https://issues.apache.org/jira/browse/HADOOP-9913
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation, metrics
Affects Versions: 3.0.0, 2.1.0-beta
 Environment: trunk
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
Priority: Minor
  Labels: newbie
 Attachments: HADOOP-9913.2.patch, HADOOP-9913.patch


 In o.a.h.ipc.metrics.RpcMetrics.java, metrics are declared as follows:
 {code}
    @Metric("Queue time") MutableRate rpcQueueTime;
    @Metric("Processsing time") MutableRate rpcProcessingTime;
 {code}
 Since some users may be confused about which unit (seconds or milliseconds) is 
 used, the unit should be documented.
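 For instance, a minimal sketch (not necessarily how the eventual patch words it) 
 could state the unit, assumed here to be milliseconds, directly in the metric 
 description:
 {code}
 // A minimal sketch, not the actual HADOOP-9913 change: spell out the unit
 // (assumed to be milliseconds) in the @Metric description string.
 import org.apache.hadoop.metrics2.annotation.Metric;
 import org.apache.hadoop.metrics2.annotation.Metrics;
 import org.apache.hadoop.metrics2.lib.MutableRate;

 @Metrics(about = "RPC metrics sketch", context = "rpc")
 class RpcMetricsUnitSketch {
   @Metric("Queue time in milliseconds") MutableRate rpcQueueTime;
   @Metric("Processing time in milliseconds") MutableRate rpcProcessingTime;
 }
 {code}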

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9774) RawLocalFileSystem.listStatus() return absolute paths when input path is relative on Windows

2013-08-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13754274#comment-13754274
 ] 

Hudson commented on HADOOP-9774:


SUCCESS: Integrated in Hadoop-trunk-Commit #4350 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4350/])
HADOOP-9774. RawLocalFileSystem.listStatus() return absolute paths when input 
path is relative on Windows. Contributed by Shanyu Zhao. (ivanmi: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1518865)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Path.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/RawLocalFileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestLocalFileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestPath.java


 RawLocalFileSystem.listStatus() return absolute paths when input path is 
 relative on Windows
 

 Key: HADOOP-9774
 URL: https://issues.apache.org/jira/browse/HADOOP-9774
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: shanyu zhao
Assignee: shanyu zhao
 Attachments: HADOOP-9774-2.patch, HADOOP-9774-3.patch, 
 HADOOP-9774-4.patch, HADOOP-9774-5.patch, HADOOP-9774.patch


 On Windows, when using RawLocalFileSystem.listStatus() to enumerate a 
 relative path (without drive spec), e.g., file:///mydata, the resulting 
 paths become absolute paths, e.g., [file://E:/mydata/t1.txt, 
 file://E:/mydata/t2.txt...].
 Note that if we use it to enumerate an absolute path, e.g., 
 file://E:/mydata, then we get the same results as above.
 This breaks some Hive unit tests that use the local file system to simulate 
 HDFS when testing, which is why the drive spec is removed. After 
 listStatus() the path becomes absolute, and Hive fails to find the 
 path in its map reduce job.
 You'll see the following exception:
 [junit] java.io.IOException: cannot find dir = 
 pfile:/E:/GitHub/hive-monarch/build/ql/test/data/warehouse/src/kv1.txt in 
 pathToPartitionInfo: 
 [pfile:/GitHub/hive-monarch/build/ql/test/data/warehouse/src]
 [junit]   at 
 org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getPartitionDescFromPathRecursively(HiveFileFormatUtils.java:298)
 This problem is introduced by this JIRA:
 HADOOP-8962
 Prior to the fix for HADOOP-8962 (merged in 0.23.5), the resulting paths are 
 relative paths if the parent paths are relative, e.g., 
 [file:///mydata/t1.txt, file:///mydata/t2.txt...]
 This behavior change is a side effect of the fix in HADOOP-8962, not an 
 intended change. The resulting behavior, even though it is legitimate from a 
 functional point of view, breaks consistency from the caller's point of view: 
 when the caller uses a relative path (without drive spec) to do listStatus(), 
 the resulting paths should be relative. Therefore, I think this should be 
 fixed.
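 To make the expected contract concrete, here is a hypothetical illustration (not 
 the committed test; the class and path names are made up) of the behavior this 
 issue asks for:
 {code}
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;

 public class ListStatusRelativeExample {
   public static void main(String[] args) throws Exception {
     FileSystem fs = FileSystem.getLocal(new Configuration());
     // Enumerate a relative local path (no drive spec).
     for (FileStatus status : fs.listStatus(new Path("file:///mydata"))) {
       // Expected after the fix: relative children such as file:///mydata/t1.txt,
       // not paths resolved to an absolute form like file://E:/mydata/t1.txt.
       System.out.println(status.getPath());
     }
   }
 }
 {code}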

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9774) RawLocalFileSystem.listStatus() return absolute paths when input path is relative on Windows

2013-08-29 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mitic updated HADOOP-9774:
---

   Resolution: Fixed
Fix Version/s: 2.1.1-beta
 Release Note: Committed to trunk, branch-2 and branch-2.1-beta. Thank you 
Shanyu for the contribution!
   Status: Resolved  (was: Patch Available)

 RawLocalFileSystem.listStatus() return absolute paths when input path is 
 relative on Windows
 

 Key: HADOOP-9774
 URL: https://issues.apache.org/jira/browse/HADOOP-9774
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: shanyu zhao
Assignee: shanyu zhao
 Fix For: 2.1.1-beta

 Attachments: HADOOP-9774-2.patch, HADOOP-9774-3.patch, 
 HADOOP-9774-4.patch, HADOOP-9774-5.patch, HADOOP-9774.patch


 On Windows, when using RawLocalFileSystem.listStatus() to enumerate a 
 relative path (without drive spec), e.g., file:///mydata, the resulting 
 paths become absolute paths, e.g., [file://E:/mydata/t1.txt, 
 file://E:/mydata/t2.txt...].
 Note that if we use it to enumerate an absolute path, e.g., 
 file://E:/mydata, then we get the same results as above.
 This breaks some Hive unit tests that use the local file system to simulate 
 HDFS when testing, which is why the drive spec is removed. After 
 listStatus() the path becomes absolute, and Hive fails to find the 
 path in its map reduce job.
 You'll see the following exception:
 [junit] java.io.IOException: cannot find dir = 
 pfile:/E:/GitHub/hive-monarch/build/ql/test/data/warehouse/src/kv1.txt in 
 pathToPartitionInfo: 
 [pfile:/GitHub/hive-monarch/build/ql/test/data/warehouse/src]
 [junit]   at 
 org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getPartitionDescFromPathRecursively(HiveFileFormatUtils.java:298)
 This problem is introduced by this JIRA:
 HADOOP-8962
 Prior to the fix for HADOOP-8962 (merged in 0.23.5), the resulting paths are 
 relative paths if the parent paths are relative, e.g., 
 [file:///mydata/t1.txt, file:///mydata/t2.txt...]
 This behavior change is a side effect of the fix in HADOOP-8962, not an 
 intended change. The resulting behavior, even though it is legitimate from a 
 functional point of view, breaks consistency from the caller's point of view: 
 when the caller uses a relative path (without drive spec) to do listStatus(), 
 the resulting paths should be relative. Therefore, I think this should be 
 fixed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9774) RawLocalFileSystem.listStatus() return absolute paths when input path is relative on Windows

2013-08-29 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mitic updated HADOOP-9774:
---

Release Note:   (was: Committed to trunk, branch-2 and branch-2.1-beta. 
Thank you Shanyu for the contribution!)

Committed to trunk, branch-2 and branch-2.1-beta. Thank you Shanyu for the 
contribution!

 RawLocalFileSystem.listStatus() return absolute paths when input path is 
 relative on Windows
 

 Key: HADOOP-9774
 URL: https://issues.apache.org/jira/browse/HADOOP-9774
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: shanyu zhao
Assignee: shanyu zhao
 Fix For: 2.1.1-beta

 Attachments: HADOOP-9774-2.patch, HADOOP-9774-3.patch, 
 HADOOP-9774-4.patch, HADOOP-9774-5.patch, HADOOP-9774.patch


 On Windows, when using RawLocalFileSystem.listStatus() to enumerate a 
 relative path (without drive spec), e.g., file:///mydata, the resulting 
 paths become absolute paths, e.g., [file://E:/mydata/t1.txt, 
 file://E:/mydata/t2.txt...].
 Note that if we use it to enumerate an absolute path, e.g., 
 file://E:/mydata, then we get the same results as above.
 This breaks some Hive unit tests that use the local file system to simulate 
 HDFS when testing, which is why the drive spec is removed. After 
 listStatus() the path becomes absolute, and Hive fails to find the 
 path in its map reduce job.
 You'll see the following exception:
 [junit] java.io.IOException: cannot find dir = 
 pfile:/E:/GitHub/hive-monarch/build/ql/test/data/warehouse/src/kv1.txt in 
 pathToPartitionInfo: 
 [pfile:/GitHub/hive-monarch/build/ql/test/data/warehouse/src]
 [junit]   at 
 org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getPartitionDescFromPathRecursively(HiveFileFormatUtils.java:298)
 This problem is introduced by this JIRA:
 HADOOP-8962
 Prior to the fix for HADOOP-8962 (merged in 0.23.5), the resulting paths are 
 relative paths if the parent paths are relative, e.g., 
 [file:///mydata/t1.txt, file:///mydata/t2.txt...]
 This behavior change is a side effect of the fix in HADOOP-8962, not an 
 intended change. The resulting behavior, even though it is legitimate from a 
 functional point of view, breaks consistency from the caller's point of view: 
 when the caller uses a relative path (without drive spec) to do listStatus(), 
 the resulting paths should be relative. Therefore, I think this should be 
 fixed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9791) Add a test case covering long paths for new FileUtil access check methods

2013-08-29 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13754305#comment-13754305
 ] 

Ivan Mitic commented on HADOOP-9791:


Findbugs warnings are not related to the patch. 

 Add a test case covering long paths for new FileUtil access check methods
 -

 Key: HADOOP-9791
 URL: https://issues.apache.org/jira/browse/HADOOP-9791
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 1-win, 2.1.0-beta
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HADOOP-9791.branch-1-win.patch, HADOOP-9791.patch


 We've seen historically that paths longer than 260 chars can cause operations 
 to fail on Windows if not properly handled. Filing a tracking JIRA to add a 
 native IO test case with long paths for the new FileUtil access check methods 
 added with HADOOP-9413. 
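 As a rough sketch (not the actual test in the attached patches), such a test could 
 build a path well beyond 260 characters and run it through the HADOOP-9413 
 helpers, assumed here to be FileUtil.canRead/canWrite/canExecute:
 {code}
 import java.io.File;
 import java.io.IOException;
 import org.apache.hadoop.fs.FileUtil;

 public class LongPathAccessCheckSketch {
   public static void main(String[] args) throws IOException {
     StringBuilder dir = new StringBuilder(System.getProperty("java.io.tmpdir"));
     while (dir.length() < 300) {           // push well past the 260-char limit
       dir.append(File.separatorChar).append("longpathsegment");
     }
     File probe = new File(dir.toString(), "probe.txt");
     if (!probe.getParentFile().mkdirs() && !probe.getParentFile().isDirectory()) {
       throw new IOException("could not create " + probe.getParent());
     }
     probe.createNewFile();
     // The access checks should behave the same for paths longer than 260 chars.
     System.out.println("canRead=" + FileUtil.canRead(probe)
         + " canWrite=" + FileUtil.canWrite(probe)
         + " canExecute=" + FileUtil.canExecute(probe));
   }
 }
 {code}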

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9907) Webapp http://hostname:port/metrics link is not working

2013-08-29 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13754358#comment-13754358
 ] 

Xuan Gong commented on HADOOP-9907:
---

{code}
addServlet("metrics", "/metrics", MetricsServlet.class);
{code}

Does this line mean we have already added the metrics servlet to the httpServer?

 Webapp http://hostname:port/metrics  link is not working 
 -

 Key: HADOOP-9907
 URL: https://issues.apache.org/jira/browse/HADOOP-9907
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Jian He
Priority: Blocker

 This link is not working; it just shows a blank page.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (HADOOP-9917) cryptic warning when killing a job running with yarn

2013-08-29 Thread Roman Shaposhnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Shaposhnik reassigned HADOOP-9917:


Assignee: Roman Shaposhnik

 cryptic warning when killing a job running with yarn
 

 Key: HADOOP-9917
 URL: https://issues.apache.org/jira/browse/HADOOP-9917
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.1.0-beta
Reporter: André Kelpe
Assignee: Roman Shaposhnik
Priority: Minor
 Attachments: HADOOP-9917.patch.txt


 When I am killing a job like this
 hadoop job -kill jobid
 I am getting a cryptic warning, which I don't really understand:
 DEPRECATED: Use of this script to execute mapred command is deprecated.
 Instead use the mapred command for it.
 I have trouble parsing this, and I believe many others will too. Please make 
 this warning clearer.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9917) cryptic warning when killing a job running with yarn

2013-08-29 Thread Roman Shaposhnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Shaposhnik updated HADOOP-9917:
-

Attachment: HADOOP-9917.patch.txt

 cryptic warning when killing a job running with yarn
 

 Key: HADOOP-9917
 URL: https://issues.apache.org/jira/browse/HADOOP-9917
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.1.0-beta
Reporter: André Kelpe
Priority: Minor
 Attachments: HADOOP-9917.patch.txt


 When I am killing a job like this
 hadoop job -kill jobid
 I am getting a cryptic warning, which I don't really understand:
 DEPRECATED: Use of this script to execute mapred command is deprecated.
 Instead use the mapred command for it.
 I have trouble parsing this, and I believe many others will too. Please make 
 this warning clearer.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9917) cryptic warning when killing a job running with yarn

2013-08-29 Thread Roman Shaposhnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Shaposhnik updated HADOOP-9917:
-

Status: Patch Available  (was: Open)

 cryptic warning when killing a job running with yarn
 

 Key: HADOOP-9917
 URL: https://issues.apache.org/jira/browse/HADOOP-9917
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.1.0-beta
Reporter: André Kelpe
Priority: Minor
 Attachments: HADOOP-9917.patch.txt


 When I am killing a job like this
 hadoop job -kill jobid
 I am getting a cryptic warning, which I don't really understand:
 DEPRECATED: Use of this script to execute mapred command is deprecated.
 Instead use the mapred command for it.
 I have trouble parsing this, and I believe many others will too. Please make 
 this warning clearer.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9917) cryptic warning when killing a job running with yarn

2013-08-29 Thread Roman Shaposhnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13754369#comment-13754369
 ] 

Roman Shaposhnik commented on HADOOP-9917:
--

The CLI change simply reflects the fact that the 3 parts of Hadoop 2.x (HDFS, YARN, 
MR) are actually independent of each other, to the degree that one can, let's say, 
use YARN but not MR, or even use YARN/MR over an alternative filesystem 
implementation. Hence the split of functionality into separate scripts.
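In practice that means a kill that used to be issued as hadoop job -kill jobid can 
be issued directly through the dedicated script as mapred job -kill jobid, which is 
exactly what the deprecation warning is suggesting.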

 cryptic warning when killing a job running with yarn
 

 Key: HADOOP-9917
 URL: https://issues.apache.org/jira/browse/HADOOP-9917
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.1.0-beta
Reporter: André Kelpe
Assignee: Roman Shaposhnik
Priority: Minor
 Attachments: HADOOP-9917.patch.txt


 When I am killing a job like this
 hadoop job -kill jobid
 I am getting a cryptic warning, which I don't really understand:
 DEPRECATED: Use of this script to execute mapred command is deprecated.
 Instead use the mapred command for it.
 I have trouble parsing this, and I believe many others will too. Please make 
 this warning clearer.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9907) Webapp http://hostname:port/metrics link is not working

2013-08-29 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13754388#comment-13754388
 ] 

Harsh J commented on HADOOP-9907:
-

Wasn't the /jmx endpoint supposed to replace the /metrics endpoint going forward?

@Xuan - I believe that servlet only serves MetricsV1, not V2?

 Webapp http://hostname:port/metrics  link is not working 
 -

 Key: HADOOP-9907
 URL: https://issues.apache.org/jira/browse/HADOOP-9907
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Jian He
Priority: Blocker

 This link is not working; it just shows a blank page.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9917) cryptic warning when killing a job running with yarn

2013-08-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13754392#comment-13754392
 ] 

Hadoop QA commented on HADOOP-9917:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12600726/HADOOP-9917.patch.txt
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The following test timeouts occurred in 
hadoop-common-project/hadoop-common:

org.apache.hadoop.ipc.TestIPC

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3042//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3042//console

This message is automatically generated.

 cryptic warning when killing a job running with yarn
 

 Key: HADOOP-9917
 URL: https://issues.apache.org/jira/browse/HADOOP-9917
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.1.0-beta
Reporter: André Kelpe
Assignee: Roman Shaposhnik
Priority: Minor
 Attachments: HADOOP-9917.patch.txt


 When I am killing a job like this
 hadoop job -kill jobid
 I am getting a cryptic warning, which I don't really understand:
 DEPRECATED: Use of this script to execute mapred command is deprecated.
 Instead use the mapred command for it.
 I have trouble parsing this, and I believe many others will too. Please make 
 this warning clearer.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira