[jira] [Assigned] (HADOOP-10040) hadoop.cmd in UNIX format and would not run by default on Windows

2013-10-11 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth reassigned HADOOP-10040:
--

Assignee: Chris Nauroth

 hadoop.cmd in UNIX format and would not run by default on Windows
 -

 Key: HADOOP-10040
 URL: https://issues.apache.org/jira/browse/HADOOP-10040
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Yingda Chen
Assignee: Chris Nauroth

 The hadoop.cmd currently checked into hadoop-common is in UNIX format, 
 like most of the other source files. However, hadoop.cmd is meant to be used 
 on Windows only, and the fact that it is in UNIX format makes it unrunnable as 
 is on the Windows platform.
 An exception should be made for hadoop.cmd (and other .cmd files, for that 
 matter) to make sure they are in DOS format, so that they are runnable as is 
 when checked out from the source repository.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HADOOP-10040) hadoop.cmd in UNIX format and would not run by default on Windows

2013-10-11 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13792388#comment-13792388
 ] 

Chris Nauroth commented on HADOOP-10040:


Hi, [~yingdachen].  I'm guessing that you had checked out the code from 
Subversion and not Git.  Can you please confirm?

We've set up line ending exceptions like this already for Git, covering *.bat, 
*.cmd, *.csproj, and *.sln.  This is specified in .gitattributes:

{code}
*.bat     text eol=crlf
*.cmd     text eol=crlf
*.csproj  text merge=union eol=crlf
*.sln     text merge=union eol=crlf
{code}

This is working correctly for me on a checkout from git.  Here is a small 
partial hex dump of hadoop.cmd showing 0D 0A for line endings.

{code}
000: 4065 6368 6f20 6f66 660d 0a40 7265 6d20  @echo off..@rem 
010: 4c69 6365 6e73 6564 2074 6f20 7468 6520  Licensed to the 
020: 4170 6163 6865 2053 6f66 7477 6172 6520  Apache Software 
030: 466f 756e 6461 7469 6f6e 2028 4153 4629  Foundation (ASF)
040: 2075 6e64 6572 206f 6e65 206f 7220 6d6f   under one or mo
050: 7265 0d0a 4072 656d 2063 6f6e 7472 6962  re..@rem contrib
{code}

However, for my checkout of the Subversion repo, I see that I'm getting 0A for 
the line endings:

{code}
000: 4065 6368 6f20 6f66 660a 4072 656d 204c  @echo off.@rem L
010: 6963 656e 7365 6420 746f 2074 6865 2041  icensed to the A
020: 7061 6368 6520 536f 6674 7761 7265 2046  pache Software F
030: 6f75 6e64 6174 696f 6e20 2841 5346 2920  oundation (ASF) 
040: 756e 6465 7220 6f6e 6520 6f72 206d 6f72  under one or mor
050: 650a 4072 656d 2063 6f6e 7472 6962 7574  e.@rem contribut
{code}

I guess we'll need to apply these rules separately for the Subversion repo by 
running the appropriate svn propset svn:eol-style commands.
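The byte patterns in the hex dumps above can be reproduced without any repository at all; here is a small stand-alone Java sketch (class name and sample strings are illustrative) that prints the same LF-only vs. CRLF byte sequences:

```java
import java.nio.charset.StandardCharsets;

// Reproduces the byte-level difference shown in the hex dumps above:
// LF-only endings (UNIX format) vs. CRLF endings (DOS format).
public class EolDemo {
    static String hex(String s) {
        StringBuilder sb = new StringBuilder();
        for (byte b : s.getBytes(StandardCharsets.US_ASCII)) {
            sb.append(String.format("%02x ", b));
        }
        return sb.toString().trim();
    }

    public static void main(String[] args) {
        System.out.println(hex("@echo off\n"));    // ends in 0a only (UNIX format)
        System.out.println(hex("@echo off\r\n"));  // ends in 0d 0a (DOS format)
        // The Subversion-side fix suggested above would be run in a working
        // copy along these lines (not executed here):
        //   svn propset svn:eol-style CRLF hadoop.cmd
    }
}
```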

 hadoop.cmd in UNIX format and would not run by default on Windows
 -

 Key: HADOOP-10040
 URL: https://issues.apache.org/jira/browse/HADOOP-10040
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Yingda Chen

 The hadoop.cmd currently checked into hadoop-common is in UNIX format, 
 like most of the other source files. However, hadoop.cmd is meant to be used 
 on Windows only, and the fact that it is in UNIX format makes it unrunnable as 
 is on the Windows platform.
 An exception should be made for hadoop.cmd (and other .cmd files, for that 
 matter) to make sure they are in DOS format, so that they are runnable as is 
 when checked out from the source repository.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HADOOP-10039) Add Hive to the list of projects using AbstractDelegationTokenSecretManager

2013-10-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13792520#comment-13792520
 ] 

Hudson commented on HADOOP-10039:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk #359 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/359/])
HADOOP-10039. Add Hive to the list of projects using 
AbstractDelegationTokenSecretManager. Contributed by Haohui Mai. (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1531158)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java
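The change amounts to listing Hive in the class's consumer annotation. A self-contained sketch, using a stand-in annotation so it compiles without Hadoop on the classpath (the real annotation is org.apache.hadoop.classification.InterfaceAudience.LimitedPrivate, and the exact value list shown is an assumption):

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.util.Arrays;

// Stand-in for InterfaceAudience.LimitedPrivate, for illustration only.
@Retention(RetentionPolicy.RUNTIME)
@interface LimitedPrivate {
    String[] value();
}

// The fix adds "Hive" to the consumer list, documenting that Hive's
// DelegationTokenSecretManager extends this class.
@LimitedPrivate({"HDFS", "MapReduce", "Hive"})
abstract class AbstractDelegationTokenSecretManager {
}

public class Demo {
    public static void main(String[] args) {
        String[] consumers = AbstractDelegationTokenSecretManager.class
                .getAnnotation(LimitedPrivate.class).value();
        System.out.println(Arrays.asList(consumers));
    }
}
```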


 Add Hive to the list of projects using AbstractDelegationTokenSecretManager
 ---

 Key: HADOOP-10039
 URL: https://issues.apache.org/jira/browse/HADOOP-10039
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.0-alpha
Reporter: Suresh Srinivas
Assignee: Haohui Mai
 Fix For: 2.2.1

 Attachments: HDFS-5340.000.patch


 org.apache.hadoop.hive.thrift.DelegationTokenSecretManager extends 
 AbstractDelegationTokenSecretManager. This should be captured in the 
 InterfaceAudience annotation of AbstractDelegationTokenSecretManager.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HADOOP-10029) Specifying har file to MR job fails in secure cluster

2013-10-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13792521#comment-13792521
 ] 

Hudson commented on HADOOP-10029:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk #359 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/359/])
HADOOP-10029. Specifying har file to MR job fails in secure cluster. 
Contributed by Suresh Srinivas. (suresh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1531125)
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/HarFileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestHarFileSystem.java


 Specifying har file to MR job fails in secure cluster
 -

 Key: HADOOP-10029
 URL: https://issues.apache.org/jira/browse/HADOOP-10029
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.0-alpha
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Fix For: 2.2.1

 Attachments: HADOOP-10029.1.patch, HADOOP-10029.2.patch, 
 HADOOP-10029.3.patch, HADOOP-10029.4.patch, HADOOP-10029.4.patch, 
 HADOOP-10029.5.patch, HADOOP-10029.6.patch, HADOOP-10029.patch


 This is an issue found by [~rramya]. See the exception stack trace in the 
 following comment.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HADOOP-10042) Heap space error during copy from maptask to reduce task

2013-10-11 Thread Dieter De Witte (JIRA)
Dieter De Witte created HADOOP-10042:


 Summary: Heap space error during copy from maptask to reduce task
 Key: HADOOP-10042
 URL: https://issues.apache.org/jira/browse/HADOOP-10042
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 1.2.1
 Environment: Ubuntu cluster
Reporter: Dieter De Witte
 Fix For: 1.2.1


http://stackoverflow.com/questions/19298357/out-of-memory-error-in-mapreduce-shuffle-phase

I've described the problem on Stack Overflow as well. It contains a link to 
another JIRA: 
http://hadoop-common.472056.n3.nabble.com/Shuffle-In-Memory-OutOfMemoryError-td433197.html

My errors are exactly the same: an out-of-memory error when 
mapred.job.shuffle.input.buffer.percent = 0.7. The program does work when I set 
it to 0.2. Does this mean the original JIRA was not resolved?

Does anybody have an idea whether this is a MapReduce issue or a 
misconfiguration on my part?
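For reference, the property in question lives in mapred-site.xml; a minimal fragment showing the reporter's workaround value (shown as illustration, not a recommendation):

```xml
<!-- Fragment of mapred-site.xml: only the shuffle buffer setting discussed
     above; 0.2 is the reporter's workaround for the 0.7 value. -->
<property>
  <name>mapred.job.shuffle.input.buffer.percent</name>
  <value>0.2</value>
  <description>Fraction of the reduce task heap used to buffer map outputs
  during the in-memory copy phase of the shuffle.</description>
</property>
```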



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HADOOP-10042) Heap space error during copy from maptask to reduce task

2013-10-11 Thread Dieter De Witte (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dieter De Witte updated HADOOP-10042:
-

Attachment: mapred-site.OLDxml

mapred-site.xml - configuration

 Heap space error during copy from maptask to reduce task
 

 Key: HADOOP-10042
 URL: https://issues.apache.org/jira/browse/HADOOP-10042
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 1.2.1
 Environment: Ubuntu cluster
Reporter: Dieter De Witte
 Fix For: 1.2.1

 Attachments: mapred-site.OLDxml


 http://stackoverflow.com/questions/19298357/out-of-memory-error-in-mapreduce-shuffle-phase
 I've described the problem on Stack Overflow as well. It contains a link to 
 another JIRA: 
 http://hadoop-common.472056.n3.nabble.com/Shuffle-In-Memory-OutOfMemoryError-td433197.html
 My errors are exactly the same: an out-of-memory error when 
 mapred.job.shuffle.input.buffer.percent = 0.7. The program does work when I 
 set it to 0.2. Does this mean the original JIRA was not resolved?
 Does anybody have an idea whether this is a MapReduce issue or a 
 misconfiguration on my part?



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HADOOP-10039) Add Hive to the list of projects using AbstractDelegationTokenSecretManager

2013-10-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13792602#comment-13792602
 ] 

Hudson commented on HADOOP-10039:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #1549 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1549/])
HADOOP-10039. Add Hive to the list of projects using 
AbstractDelegationTokenSecretManager. Contributed by Haohui Mai. (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1531158)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java


 Add Hive to the list of projects using AbstractDelegationTokenSecretManager
 ---

 Key: HADOOP-10039
 URL: https://issues.apache.org/jira/browse/HADOOP-10039
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.0-alpha
Reporter: Suresh Srinivas
Assignee: Haohui Mai
 Fix For: 2.2.1

 Attachments: HDFS-5340.000.patch


 org.apache.hadoop.hive.thrift.DelegationTokenSecretManager extends 
 AbstractDelegationTokenSecretManager. This should be captured in the 
 InterfaceAudience annotation of AbstractDelegationTokenSecretManager.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HADOOP-10029) Specifying har file to MR job fails in secure cluster

2013-10-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13792603#comment-13792603
 ] 

Hudson commented on HADOOP-10029:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #1549 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1549/])
HADOOP-10029. Specifying har file to MR job fails in secure cluster. 
Contributed by Suresh Srinivas. (suresh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1531125)
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/HarFileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestHarFileSystem.java


 Specifying har file to MR job fails in secure cluster
 -

 Key: HADOOP-10029
 URL: https://issues.apache.org/jira/browse/HADOOP-10029
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.0-alpha
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Fix For: 2.2.1

 Attachments: HADOOP-10029.1.patch, HADOOP-10029.2.patch, 
 HADOOP-10029.3.patch, HADOOP-10029.4.patch, HADOOP-10029.4.patch, 
 HADOOP-10029.5.patch, HADOOP-10029.6.patch, HADOOP-10029.patch


 This is an issue found by [~rramya]. See the exception stack trace in the 
 following comment.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HADOOP-10043) Convert org.apache.hadoop.security.token.SecretManager to be an AbstractService

2013-10-11 Thread Tsuyoshi OZAWA (JIRA)
Tsuyoshi OZAWA created HADOOP-10043:
---

 Summary: Convert org.apache.hadoop.security.token.SecretManager to 
be an AbstractService
 Key: HADOOP-10043
 URL: https://issues.apache.org/jira/browse/HADOOP-10043
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Tsuyoshi OZAWA
Assignee: Tsuyoshi OZAWA


I'm dealing with YARN-1172, a subtask of YARN-1139 (a ResourceManager HA related 
task). The following is quoted from my comment on YARN-1172:
{quote}
I've found that it requires org.apache.hadoop.security.token.SecretManager to 
be an AbstractService,
because both AbstractService and org.apache.hadoop.security.token.SecretManager 
are abstract classes and we cannot extend both of them at the same time.
{quote}
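The single-inheritance constraint described in the quote can be illustrated with a minimal, self-contained sketch (all class names below are illustrative stand-ins, not the real Hadoop types):

```java
// Stand-ins for the two abstract classes named above.
abstract class AbstractService {
    private boolean started;
    public void start() { started = true; }
    public boolean isStarted() { return started; }
}

abstract class SecretManager {
    public abstract byte[] createPassword(String identifier);
}

// Illegal: Java permits only one superclass, so this cannot compile:
//   class TokenSecretManager extends AbstractService, SecretManager { ... }

// The issue's proposal, roughly: make SecretManager itself extend
// AbstractService, so a single subclass inherits both the service
// lifecycle and the secret-manager contract.
abstract class ServiceBackedSecretManager extends AbstractService {
    public abstract byte[] createPassword(String identifier);
}

public class Main {
    public static void main(String[] args) {
        ServiceBackedSecretManager mgr = new ServiceBackedSecretManager() {
            public byte[] createPassword(String identifier) {
                return identifier.getBytes();
            }
        };
        mgr.start();                                   // service lifecycle
        System.out.println(mgr.isStarted());
        System.out.println(mgr.createPassword("token").length);
    }
}
```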



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HADOOP-10040) hadoop.cmd in UNIX format and would not run by default on Windows

2013-10-11 Thread Yingda Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13792629#comment-13792629
 ] 

Yingda Chen commented on HADOOP-10040:
--

Yes, we see it on trunk checked out from SVN, not a Git checkout.



 hadoop.cmd in UNIX format and would not run by default on Windows
 -

 Key: HADOOP-10040
 URL: https://issues.apache.org/jira/browse/HADOOP-10040
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Yingda Chen
Assignee: Chris Nauroth

 The hadoop.cmd currently checked into hadoop-common is in UNIX format, 
 like most of the other source files. However, hadoop.cmd is meant to be used 
 on Windows only, and the fact that it is in UNIX format makes it unrunnable as 
 is on the Windows platform.
 An exception should be made for hadoop.cmd (and other .cmd files, for that 
 matter) to make sure they are in DOS format, so that they are runnable as is 
 when checked out from the source repository.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HADOOP-10039) Add Hive to the list of projects using AbstractDelegationTokenSecretManager

2013-10-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13792638#comment-13792638
 ] 

Hudson commented on HADOOP-10039:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1575 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1575/])
HADOOP-10039. Add Hive to the list of projects using 
AbstractDelegationTokenSecretManager. Contributed by Haohui Mai. (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1531158)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java


 Add Hive to the list of projects using AbstractDelegationTokenSecretManager
 ---

 Key: HADOOP-10039
 URL: https://issues.apache.org/jira/browse/HADOOP-10039
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.0-alpha
Reporter: Suresh Srinivas
Assignee: Haohui Mai
 Fix For: 2.2.1

 Attachments: HDFS-5340.000.patch


 org.apache.hadoop.hive.thrift.DelegationTokenSecretManager extends 
 AbstractDelegationTokenSecretManager. This should be captured in the 
 InterfaceAudience annotation of AbstractDelegationTokenSecretManager.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HADOOP-10029) Specifying har file to MR job fails in secure cluster

2013-10-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13792639#comment-13792639
 ] 

Hudson commented on HADOOP-10029:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1575 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1575/])
HADOOP-10029. Specifying har file to MR job fails in secure cluster. 
Contributed by Suresh Srinivas. (suresh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1531125)
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/HarFileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestHarFileSystem.java


 Specifying har file to MR job fails in secure cluster
 -

 Key: HADOOP-10029
 URL: https://issues.apache.org/jira/browse/HADOOP-10029
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.0-alpha
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Fix For: 2.2.1

 Attachments: HADOOP-10029.1.patch, HADOOP-10029.2.patch, 
 HADOOP-10029.3.patch, HADOOP-10029.4.patch, HADOOP-10029.4.patch, 
 HADOOP-10029.5.patch, HADOOP-10029.6.patch, HADOOP-10029.patch


 This is an issue found by [~rramya]. See the exception stack trace in the 
 following comment.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HADOOP-10043) Convert org.apache.hadoop.security.token.SecretManager to be an AbstractService

2013-10-11 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA updated HADOOP-10043:


Status: Patch Available  (was: Open)

 Convert org.apache.hadoop.security.token.SecretManager to be an 
 AbstractService
 ---

 Key: HADOOP-10043
 URL: https://issues.apache.org/jira/browse/HADOOP-10043
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Tsuyoshi OZAWA
Assignee: Tsuyoshi OZAWA
 Attachments: HADOOP-10043.1.patch


 I'm dealing with YARN-1172, a subtask of YARN-1139 (a ResourceManager HA 
 related task). The following is quoted from my comment on YARN-1172:
 {quote}
 I've found that it requires org.apache.hadoop.security.token.SecretManager to 
 be an AbstractService,
 because both AbstractService and 
 org.apache.hadoop.security.token.SecretManager are abstract classes and we 
 cannot extend both of them at the same time.
 {quote}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HADOOP-10043) Convert org.apache.hadoop.security.token.SecretManager to be an AbstractService

2013-10-11 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA updated HADOOP-10043:


Attachment: HADOOP-10043.1.patch

 Convert org.apache.hadoop.security.token.SecretManager to be an 
 AbstractService
 ---

 Key: HADOOP-10043
 URL: https://issues.apache.org/jira/browse/HADOOP-10043
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Tsuyoshi OZAWA
Assignee: Tsuyoshi OZAWA
 Attachments: HADOOP-10043.1.patch


 I'm dealing with YARN-1172, a subtask of YARN-1139 (a ResourceManager HA 
 related task). The following is quoted from my comment on YARN-1172:
 {quote}
 I've found that it requires org.apache.hadoop.security.token.SecretManager to 
 be an AbstractService,
 because both AbstractService and 
 org.apache.hadoop.security.token.SecretManager are abstract classes and we 
 cannot extend both of them at the same time.
 {quote}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HADOOP-10042) Heap space error during copy from maptask to reduce task

2013-10-11 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13792729#comment-13792729
 ] 

Suresh Srinivas commented on HADOOP-10042:
--

JIRA is for reporting bugs, not for asking questions. Please use the user 
mailing list for questions. See http://hadoop.apache.org/mailing_lists.html

 Heap space error during copy from maptask to reduce task
 

 Key: HADOOP-10042
 URL: https://issues.apache.org/jira/browse/HADOOP-10042
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 1.2.1
 Environment: Ubuntu cluster
Reporter: Dieter De Witte
 Fix For: 1.2.1

 Attachments: mapred-site.OLDxml


 http://stackoverflow.com/questions/19298357/out-of-memory-error-in-mapreduce-shuffle-phase
 I've described the problem on Stack Overflow as well. It contains a link to 
 another JIRA: 
 http://hadoop-common.472056.n3.nabble.com/Shuffle-In-Memory-OutOfMemoryError-td433197.html
 My errors are exactly the same: an out-of-memory error when 
 mapred.job.shuffle.input.buffer.percent = 0.7. The program does work when I 
 set it to 0.2. Does this mean the original JIRA was not resolved?
 Does anybody have an idea whether this is a MapReduce issue or a 
 misconfiguration on my part?



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Resolved] (HADOOP-10042) Heap space error during copy from maptask to reduce task

2013-10-11 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas resolved HADOOP-10042.
--

Resolution: Invalid

 Heap space error during copy from maptask to reduce task
 

 Key: HADOOP-10042
 URL: https://issues.apache.org/jira/browse/HADOOP-10042
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 1.2.1
 Environment: Ubuntu cluster
Reporter: Dieter De Witte
 Fix For: 1.2.1

 Attachments: mapred-site.OLDxml


 http://stackoverflow.com/questions/19298357/out-of-memory-error-in-mapreduce-shuffle-phase
 I've described the problem on Stack Overflow as well. It contains a link to 
 another JIRA: 
 http://hadoop-common.472056.n3.nabble.com/Shuffle-In-Memory-OutOfMemoryError-td433197.html
 My errors are exactly the same: an out-of-memory error when 
 mapred.job.shuffle.input.buffer.percent = 0.7. The program does work when I 
 set it to 0.2. Does this mean the original JIRA was not resolved?
 Does anybody have an idea whether this is a MapReduce issue or a 
 misconfiguration on my part?



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HADOOP-10042) Heap space error during copy from maptask to reduce task

2013-10-11 Thread Dieter De Witte (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13792750#comment-13792750
 ] 

Dieter De Witte commented on HADOOP-10042:
--

But I think it's a bug (see my reference to the other JIRA). If someone can 
confirm it isn't, then that's sufficient for me. I've solved it by changing a 
parameter which shouldn't have an effect, so this implies that something is 
wrong with this parameter!

 Heap space error during copy from maptask to reduce task
 

 Key: HADOOP-10042
 URL: https://issues.apache.org/jira/browse/HADOOP-10042
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 1.2.1
 Environment: Ubuntu cluster
Reporter: Dieter De Witte
 Fix For: 1.2.1

 Attachments: mapred-site.OLDxml


 http://stackoverflow.com/questions/19298357/out-of-memory-error-in-mapreduce-shuffle-phase
 I've described the problem on Stack Overflow as well. It contains a link to 
 another JIRA: 
 http://hadoop-common.472056.n3.nabble.com/Shuffle-In-Memory-OutOfMemoryError-td433197.html
 My errors are exactly the same: an out-of-memory error when 
 mapred.job.shuffle.input.buffer.percent = 0.7. The program does work when I 
 set it to 0.2. Does this mean the original JIRA was not resolved?
 Does anybody have an idea whether this is a MapReduce issue or a 
 misconfiguration on my part?



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HADOOP-9494) Excluded auto-generated and examples code from clover reports

2013-10-11 Thread Andrey Klochkov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Klochkov updated HADOOP-9494:


Attachment: HADOOP-9494--n5.patch

Updating the patch
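The patch itself isn't reproduced in this thread, but a Clover exclusion of generated and example sources generally takes a shape like the following pom.xml sketch (plugin coordinates and patterns are assumptions for illustration, not the actual patch contents):

```xml
<!-- Hypothetical sketch only; see the attached HADOOP-9494--n5.patch for
     the actual change. -->
<plugin>
  <groupId>com.atlassian.maven.plugins</groupId>
  <artifactId>maven-clover2-plugin</artifactId>
  <configuration>
    <excludes>
      <exclude>**/generated-sources/**</exclude>
      <exclude>**/examples/**</exclude>
    </excludes>
  </configuration>
</plugin>
```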

 Excluded auto-generated and examples code from clover reports
 -

 Key: HADOOP-9494
 URL: https://issues.apache.org/jira/browse/HADOOP-9494
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.7
Reporter: Dennis Y
Assignee: Andrey Klochkov
 Attachments: HADOOP-9494--n5.patch, HADOOP-9494-trunk--N3.patch, 
 HADOOP-9494-trunk--N4.patch


 applicable to branch-0.23, branch-2, trunk



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HADOOP-9494) Excluded auto-generated and examples code from clover reports

2013-10-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13792820#comment-13792820
 ] 

Hadoop QA commented on HADOOP-9494:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12608029/HADOOP-9494--n5.patch
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3207//console

This message is automatically generated.

 Excluded auto-generated and examples code from clover reports
 -

 Key: HADOOP-9494
 URL: https://issues.apache.org/jira/browse/HADOOP-9494
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.7
Reporter: Dennis Y
Assignee: Andrey Klochkov
 Attachments: HADOOP-9494--n5.patch, HADOOP-9494-trunk--N3.patch, 
 HADOOP-9494-trunk--N4.patch


 applicable to branch-0.23, branch-2, trunk



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HADOOP-9494) Excluded auto-generated and examples code from clover reports

2013-10-11 Thread Andrey Klochkov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Klochkov updated HADOOP-9494:


Attachment: HADOOP-9494--n6.patch

 Excluded auto-generated and examples code from clover reports
 -

 Key: HADOOP-9494
 URL: https://issues.apache.org/jira/browse/HADOOP-9494
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.7
Reporter: Dennis Y
Assignee: Andrey Klochkov
 Attachments: HADOOP-9494--n5.patch, HADOOP-9494--n6.patch, 
 HADOOP-9494-trunk--N3.patch, HADOOP-9494-trunk--N4.patch


 applicable to branch-0.23, branch-2, trunk



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HADOOP-10043) Convert org.apache.hadoop.security.token.SecretManager to be an AbstractService

2013-10-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13792833#comment-13792833
 ] 

Hadoop QA commented on HADOOP-10043:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12608007/HADOOP-10043.1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs 
hadoop-hdfs-project/hadoop-hdfs-httpfs 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3206//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3206//console

This message is automatically generated.

 Convert org.apache.hadoop.security.token.SecretManager to be an 
 AbstractService
 ---

 Key: HADOOP-10043
 URL: https://issues.apache.org/jira/browse/HADOOP-10043
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Tsuyoshi OZAWA
Assignee: Tsuyoshi OZAWA
 Attachments: HADOOP-10043.1.patch


 I'm dealing with YARN-1172, a subtask of YARN-1139 (a ResourceManager HA 
 related task). The following is quoted from my comment on YARN-1172:
 {quote}
 I've found that it requires org.apache.hadoop.security.token.SecretManager to 
 be an AbstractService,
 because both AbstractService and 
 org.apache.hadoop.security.token.SecretManager are abstract classes and we 
 cannot extend both of them at the same time.
 {quote}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HADOOP-10042) Heap space error during copy from maptask to reduce task

2013-10-11 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13792852#comment-13792852
 ] 

Suresh Srinivas commented on HADOOP-10042:
--

bq. But I think it's a bug (see my reference to other JIRA)
Sorry, I could not find it. What is the JIRA number?

 Heap space error during copy from maptask to reduce task
 

 Key: HADOOP-10042
 URL: https://issues.apache.org/jira/browse/HADOOP-10042
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 1.2.1
 Environment: Ubuntu cluster
Reporter: Dieter De Witte
 Fix For: 1.2.1

 Attachments: mapred-site.OLDxml


 http://stackoverflow.com/questions/19298357/out-of-memory-error-in-mapreduce-shuffle-phase
 I've described the problem on Stack Overflow as well. It contains a link to 
 another JIRA: 
 http://hadoop-common.472056.n3.nabble.com/Shuffle-In-Memory-OutOfMemoryError-td433197.html
 My errors are exactly the same: an out-of-memory error when 
 mapred.job.shuffle.input.buffer.percent = 0.7. The program does work when I 
 set it to 0.2. Does this mean the original JIRA was not resolved?
 Does anybody have an idea whether this is a MapReduce issue or a 
 misconfiguration on my part?



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HADOOP-9494) Excluded auto-generated and examples code from clover reports

2013-10-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13792854#comment-13792854
 ] 

Hadoop QA commented on HADOOP-9494:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12608031/HADOOP-9494--n6.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3208//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3208//console

This message is automatically generated.

 Excluded auto-generated and examples code from clover reports
 -

 Key: HADOOP-9494
 URL: https://issues.apache.org/jira/browse/HADOOP-9494
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.7
Reporter: Dennis Y
Assignee: Andrey Klochkov
 Attachments: HADOOP-9494--n5.patch, HADOOP-9494--n6.patch, 
 HADOOP-9494-trunk--N3.patch, HADOOP-9494-trunk--N4.patch


 applicable to branch-0.23, branch-2, trunk





[jira] [Commented] (HADOOP-10040) hadoop.cmd in UNIX format and would not run by default on Windows

2013-10-11 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13793118#comment-13793118
 ] 

Chris Nauroth commented on HADOOP-10040:


There isn't really a patch or a code review involved in fixing this.  Just to 
give everyone a heads-up though, I'm planning on running the following script 
in the Subversion repos for trunk, branch-2, and branch-2.2.  This will convert 
Windows-related files to CRLF line endings.  I'll wait a few hours before 
doing this in case anyone has questions on the change.

{code}
for EXT in bat cmd vcxproj sln; do svn propset -R svn:eol-style CRLF *.$EXT; 
done
{code}
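One caveat, offered as an untested sketch: the glob in the loop above appears to expand only in the current directory, so a fully recursive property change would need find. The CRLF effect itself can be reproduced locally (GNU sed assumed; hadoop-sample.cmd is a throwaway file invented for this demo):

```shell
# Build a small .cmd file with LF endings, then convert it to CRLF,
# mimicking what svn:eol-style CRLF produces on checkout.
printf '@echo off\n@rem sample\n' > hadoop-sample.cmd
sed -i 's/$/\r/' hadoop-sample.cmd   # GNU sed: append a CR to each line

# The bytes now end in 0d 0a (CRLF):
od -An -tx1 hadoop-sample.cmd

# A recursive variant of the propset (untested sketch; run inside an
# svn working copy):
#   for EXT in bat cmd vcxproj sln; do
#     find . -name "*.$EXT" -not -path '*/.svn/*' \
#       -exec svn propset svn:eol-style CRLF {} +
#   done
```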


 hadoop.cmd in UNIX format and would not run by default on Windows
 -

 Key: HADOOP-10040
 URL: https://issues.apache.org/jira/browse/HADOOP-10040
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Yingda Chen
Assignee: Chris Nauroth

 The hadoop.cmd currently checked into hadoop-common is in UNIX format, the 
 same as most other src files. However, hadoop.cmd is meant to be used on 
 Windows only; the fact that it is in UNIX format makes it unrunnable as is 
 on the Windows platform.
 An exception shall be made for hadoop.cmd (and other cmd files, for that 
 matter) to make sure they are in DOS format, so that they are runnable as is 
 when checked out from the source repository.





[jira] [Created] (HADOOP-10044) Improve the javadoc of rpc code

2013-10-11 Thread Sanjay Radia (JIRA)
Sanjay Radia created HADOOP-10044:
-

 Summary: Improve the javadoc of rpc code
 Key: HADOOP-10044
 URL: https://issues.apache.org/jira/browse/HADOOP-10044
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Sanjay Radia
Assignee: Sanjay Radia
Priority: Minor








[jira] [Commented] (HADOOP-10044) Improve the javadoc of rpc code

2013-10-11 Thread Sanjay Radia (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13793139#comment-13793139
 ] 

Sanjay Radia commented on HADOOP-10044:
---

The Hadoop RPC code, especially the code in Server.java, is fairly complicated 
and poorly documented. Every time I make changes there or try to debug an 
issue, I have to relearn parts of the code. The javadoc needs to be improved.

 Improve the javadoc of rpc code
 ---

 Key: HADOOP-10044
 URL: https://issues.apache.org/jira/browse/HADOOP-10044
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Sanjay Radia
Assignee: Sanjay Radia
Priority: Minor







[jira] [Updated] (HADOOP-10044) Improve the javadoc of rpc code

2013-10-11 Thread Sanjay Radia (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sanjay Radia updated HADOOP-10044:
--

Attachment: hadoop-10044.patch

 Improve the javadoc of rpc code
 ---

 Key: HADOOP-10044
 URL: https://issues.apache.org/jira/browse/HADOOP-10044
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Sanjay Radia
Assignee: Sanjay Radia
Priority: Minor
 Attachments: hadoop-10044.patch








[jira] [Commented] (HADOOP-10044) Improve the javadoc of rpc code

2013-10-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13793199#comment-13793199
 ] 

Hadoop QA commented on HADOOP-10044:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12608119/hadoop-10044.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3209//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3209//console

This message is automatically generated.

 Improve the javadoc of rpc code
 ---

 Key: HADOOP-10044
 URL: https://issues.apache.org/jira/browse/HADOOP-10044
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Sanjay Radia
Assignee: Sanjay Radia
Priority: Minor
 Attachments: hadoop-10044.patch








[jira] [Resolved] (HADOOP-10040) hadoop.cmd in UNIX format and would not run by default on Windows

2013-10-11 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth resolved HADOOP-10040.


      Resolution: Fixed
   Fix Version/s: 2.2.1
                  3.0.0
Target Version/s: 3.0.0, 2.2.1

I have applied the line ending changes in trunk, branch-2, and branch-2.2.  
[~yingdachen], thank you for the bug report.

 hadoop.cmd in UNIX format and would not run by default on Windows
 -

 Key: HADOOP-10040
 URL: https://issues.apache.org/jira/browse/HADOOP-10040
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Yingda Chen
Assignee: Chris Nauroth
 Fix For: 3.0.0, 2.2.1


 The hadoop.cmd currently checked into hadoop-common is in UNIX format, the 
 same as most other src files. However, hadoop.cmd is meant to be used on 
 Windows only; the fact that it is in UNIX format makes it unrunnable as is 
 on the Windows platform.
 An exception shall be made for hadoop.cmd (and other cmd files, for that 
 matter) to make sure they are in DOS format, so that they are runnable as is 
 when checked out from the source repository.





[jira] [Commented] (HADOOP-10040) hadoop.cmd in UNIX format and would not run by default on Windows

2013-10-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13793227#comment-13793227
 ] 

Hudson commented on HADOOP-10040:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #4590 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4590/])
HADOOP-10040. hadoop.cmd in UNIX format and would not run by default on 
Windows. Contributed by Chris Nauroth. (cnauroth: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1531491)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/hadoop-config.cmd
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/hadoop.cmd
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/start-all.cmd
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/stop-all.cmd
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/conf/hadoop-env.cmd
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/native.sln
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/native.vcxproj
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/winutils/libwinutils.vcxproj
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/winutils/winutils.sln
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/winutils/winutils.vcxproj
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs-config.cmd
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs.cmd
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/start-dfs.cmd
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/stop-dfs.cmd
* /hadoop/common/trunk/hadoop-mapreduce-project/bin/mapred-config.cmd
* /hadoop/common/trunk/hadoop-mapreduce-project/bin/mapred.cmd
* /hadoop/common/trunk/hadoop-mapreduce-project/conf/mapred-env.cmd
* /hadoop/common/trunk/hadoop-tools/hadoop-streaming/src/test/bin/cat.cmd
* /hadoop/common/trunk/hadoop-tools/hadoop-streaming/src/test/bin/xargs_cat.cmd
* /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/bin/start-yarn.cmd
* /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/bin/stop-yarn.cmd
* /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/bin/yarn-config.cmd
* /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/bin/yarn.cmd
* /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/conf/yarn-env.cmd


 hadoop.cmd in UNIX format and would not run by default on Windows
 -

 Key: HADOOP-10040
 URL: https://issues.apache.org/jira/browse/HADOOP-10040
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Yingda Chen
Assignee: Chris Nauroth
 Fix For: 3.0.0, 2.2.1


 The hadoop.cmd currently checked into hadoop-common is in UNIX format, the 
 same as most other src files. However, hadoop.cmd is meant to be used on 
 Windows only; the fact that it is in UNIX format makes it unrunnable as is 
 on the Windows platform.
 An exception shall be made for hadoop.cmd (and other cmd files, for that 
 matter) to make sure they are in DOS format, so that they are runnable as is 
 when checked out from the source repository.





[jira] [Commented] (HADOOP-10040) hadoop.cmd in UNIX format and would not run by default on Windows

2013-10-11 Thread Yingda Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13793228#comment-13793228
 ] 

Yingda Chen commented on HADOOP-10040:
--

Thanks for the fast turnaround in fixing this.

 hadoop.cmd in UNIX format and would not run by default on Windows
 -

 Key: HADOOP-10040
 URL: https://issues.apache.org/jira/browse/HADOOP-10040
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Yingda Chen
Assignee: Chris Nauroth
 Fix For: 3.0.0, 2.2.1


 The hadoop.cmd currently checked into hadoop-common is in UNIX format, the 
 same as most other src files. However, hadoop.cmd is meant to be used on 
 Windows only; the fact that it is in UNIX format makes it unrunnable as is 
 on the Windows platform.
 An exception shall be made for hadoop.cmd (and other cmd files, for that 
 matter) to make sure they are in DOS format, so that they are runnable as is 
 when checked out from the source repository.





[jira] [Updated] (HADOOP-9623) Update jets3t dependency

2013-10-11 Thread Amandeep Khurana (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amandeep Khurana updated HADOOP-9623:
-

Target Version/s: 2.1.0-beta, 3.0.0  (was: 3.0.0, 2.1.0-beta)
          Status: Patch Available  (was: Open)

Ran into some dependency issues with httpcore with the existing patch. Fixing 
that and adding some exception handling and logging.

 Update jets3t dependency
 

 Key: HADOOP-9623
 URL: https://issues.apache.org/jira/browse/HADOOP-9623
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 2.1.0-beta, 3.0.0
Reporter: Timothy St. Clair
  Labels: maven
 Attachments: HADOOP-9623.patch, HADOOP-9623.patch


 The current version referenced in the pom is 0.6.1 (Aug 2008); updating to 
 0.9.0 enables mvn-rpmbuild to build against system dependencies. 
 http://jets3t.s3.amazonaws.com/RELEASE_NOTES.html





[jira] [Reopened] (HADOOP-10040) hadoop.cmd in UNIX format and would not run by default on Windows

2013-10-11 Thread Luke Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luke Lu reopened HADOOP-10040:
--


Whoa, this completely messes up git.

Short answer: you should svn propset the Windows files with eol-style *native*.

Long answer: in order for .gitattributes to work correctly with eol attributes, 
all text files with eol attributes are stored with LF in the repository and 
converted to the value of eol upon checkout. This is not compatible with svn 
eol-style CRLF, which changes the content in the repository as well. With svn 
eol-style native, an svn checkout will convert normalized text files (stored 
with LF) to CRLF on Windows.

I committed a workaround (to trunk and branch-2, so people can work with git) 
that marks the Windows files as binary in .gitattributes, so git won't touch 
them.
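A minimal sketch of that workaround (hypothetical patterns; the committed .gitattributes may list different files):

{code}
*.bat    binary
*.cmd    binary
*.csproj binary
*.sln    binary
{code}

Marking the files binary tells git to skip all eol conversion for them, so the CRLF bytes stored by Subversion pass through unchanged.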

 hadoop.cmd in UNIX format and would not run by default on Windows
 -

 Key: HADOOP-10040
 URL: https://issues.apache.org/jira/browse/HADOOP-10040
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Yingda Chen
Assignee: Chris Nauroth
 Fix For: 3.0.0, 2.2.1


 The hadoop.cmd currently checked in into hadoop-common is in UNIX format, 
 same as most of other src files. However, the hadoop.cmd is meant to be used 
 on Windows only, the fact that it is in UNIX format makes it unrunnable as is 
 on Window platform.
 An exception shall be made on hadoop.cmd (and other cmd files for what 
 matters) to make sure they are in DOS format, for them to be runnable as is 
 when checked out from source repository.


