[jira] [Commented] (HADOOP-9622) bzip2 codec can drop records when reading data in splits

2013-11-19 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13826352#comment-13826352
 ] 

Chris Douglas commented on HADOOP-9622:
---

bq. I'm tempted to handle this as a separate JIRA since I believe this will be 
an issue only with uncompressed inputs after this patch.

Yeah, that makes sense. Particularly since this issue covers the codec and the 
custom delimiter bug is in the text processing. Thanks for looking into it.

bq. With this patch I think we have this case covered for compressed input due 
to the needAdditionalRecordAfterSplit logic.

I... think that's true. We can think about it in the followup.

 bzip2 codec can drop records when reading data in splits
 

 Key: HADOOP-9622
 URL: https://issues.apache.org/jira/browse/HADOOP-9622
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Affects Versions: 2.0.4-alpha, 0.23.8
Reporter: Jason Lowe
Assignee: Jason Lowe
Priority: Critical
 Attachments: HADOOP-9622-2.patch, HADOOP-9622-testcase.patch, 
 HADOOP-9622.patch, blockEndingInCR.txt.bz2, blockEndingInCRThenLF.txt.bz2


 Bzip2Codec.BZip2CompressionInputStream can cause records to be dropped when 
 reading them in splits based on where record delimiters occur relative to 
 compression block boundaries.
 Thanks to [~knoguchi] for discovering this problem while working on PIG-3251.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HADOOP-10113) There are some threads which will be dead silently when uncaught exception/error occurs

2013-11-19 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13826381#comment-13826381
 ] 

Steve Loughran commented on HADOOP-10113:
-

In a past project I've had a thread base class that would be set up to send a 
callback on completion (with any exception); with something like that here, a 
handler could be set up in each of the parents & let them deal with it. They do 
need to differentiate planned exit from unplanned exit though, which is 
straightforward in a YARN service, less consistent for the others.
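A minimal sketch of that idea (illustrative only, not an existing Hadoop class): a Thread subclass that reports completion to a parent-supplied callback, passing along any uncaught Throwable so the parent can tell planned exit (null failure) from unplanned exit.

```java
// Illustrative sketch, not Hadoop code: a thread base class that reports
// completion (with any uncaught Throwable) to a callback supplied by the
// parent, so a silent death is at least observable.
public class CallbackThread extends Thread {

    /** Called exactly once when the thread finishes; failure is null on planned exit. */
    public interface CompletionHandler {
        void onCompletion(Thread t, Throwable failure);
    }

    private final Runnable work;
    private final CompletionHandler handler;

    public CallbackThread(Runnable work, CompletionHandler handler) {
        this.work = work;
        this.handler = handler;
    }

    @Override
    public void run() {
        Throwable failure = null;
        try {
            work.run();        // planned exit if this returns normally
        } catch (Throwable t) {
            failure = t;       // unplanned exit: capture it for the parent
        } finally {
            handler.onCompletion(this, failure);
        }
    }
}
```

The parent decides in onCompletion whether a null failure was an expected shutdown, and whether to log, restart, or fail fast on a non-null one.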

 There are some threads which will be dead silently when uncaught 
 exception/error occurs
 ---

 Key: HADOOP-10113
 URL: https://issues.apache.org/jira/browse/HADOOP-10113
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Kousuke Saruta
 Fix For: 3.0.0


 Related to HDFS-5500, I found there are some threads that die silently when an 
 uncaught exception/error occurs.
 For example, the following threads are affected:
 * refreshUsed in DU
 * reloader in ReloadingX509TrustManager
 * t in UserGroupInformation#spawnAutoRenewalThreadForUserCreds
 * errThread in Shell#runCommand
 * sinkThread in MetricsSinkAdapter
 * blockScannerThread in DataBlockScanner
 * emptier in NameNode#startTrashEmptier (when we use TrashPolicyDefault) 
 Some of these threads are critical if their death goes unnoticed (e.g. DU). I think 
 we should handle those exceptions/errors, and monitor thread liveness or at least log the failure.
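As a stopgap for the "log that" part, an uncaught-exception handler makes such deaths visible. A sketch (illustrative, not the patch for this JIRA; the sink parameter is an assumption for testability):

```java
import java.util.function.Consumer;

// Illustrative sketch: report an unplanned thread death to a sink (e.g. a
// logger) instead of letting the thread vanish silently. A real fix would
// also restart the thread or fail fast, depending on how critical it is.
public class ThreadDeathLogger {

    /** Returns a handler that reports an unplanned thread death to the given sink. */
    public static Thread.UncaughtExceptionHandler handler(Consumer<String> sink) {
        return (t, e) ->
            sink.accept("Thread " + t.getName() + " died unexpectedly: " + e.getMessage());
    }

    public static void main(String[] args) throws InterruptedException {
        // Simulate one of the listed background threads dying on a Throwable.
        Thread worker = new Thread(
            () -> { throw new RuntimeException("simulated failure"); },
            "refreshUsed-demo");
        worker.setUncaughtExceptionHandler(handler(System.err::println));
        worker.start();
        worker.join();
    }
}
```

Thread.setDefaultUncaughtExceptionHandler(...) installs the same kind of handler JVM-wide, which covers threads the application did not create directly.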





[jira] [Updated] (HADOOP-10103) update commons-lang to 2.6

2013-11-19 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-10103:
---

Attachment: HADOOP-10103.patch

Attaching a patch to fix pom.xml.
[~ste...@apache.org], is there anything else to do to update the jar besides 
editing the pom?

 update commons-lang to 2.6
 --

 Key: HADOOP-10103
 URL: https://issues.apache.org/jira/browse/HADOOP-10103
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: build
Reporter: Steve Loughran
Priority: Minor
 Attachments: HADOOP-10103.patch


 update commons-lang from 2.4 to 2.6





[jira] [Updated] (HADOOP-10103) update commons-lang to 2.6

2013-11-19 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-10103:
---

 Assignee: Akira AJISAKA
 Target Version/s: 3.0.0
Affects Version/s: 2.3.0
   Status: Patch Available  (was: Open)

 update commons-lang to 2.6
 --

 Key: HADOOP-10103
 URL: https://issues.apache.org/jira/browse/HADOOP-10103
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: build
Affects Versions: 2.3.0
Reporter: Steve Loughran
Assignee: Akira AJISAKA
Priority: Minor
 Attachments: HADOOP-10103.patch


 update commons-lang from 2.4 to 2.6





[jira] [Updated] (HADOOP-10103) update commons-lang to 2.6

2013-11-19 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-10103:
---

Description: update commons-lang from 2.5 to 2.6  (was: update commons-lang 
from 2.4 to 2.6)

 update commons-lang to 2.6
 --

 Key: HADOOP-10103
 URL: https://issues.apache.org/jira/browse/HADOOP-10103
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: build
Affects Versions: 2.3.0
Reporter: Steve Loughran
Assignee: Akira AJISAKA
Priority: Minor
 Attachments: HADOOP-10103.patch


 update commons-lang from 2.5 to 2.6





[jira] [Commented] (HADOOP-10103) update commons-lang to 2.6

2013-11-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13826444#comment-13826444
 ] 

Hadoop QA commented on HADOOP-10103:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12614588/HADOOP-10103.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3298//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3298//console

This message is automatically generated.

 update commons-lang to 2.6
 --

 Key: HADOOP-10103
 URL: https://issues.apache.org/jira/browse/HADOOP-10103
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: build
Affects Versions: 2.3.0
Reporter: Steve Loughran
Assignee: Akira AJISAKA
Priority: Minor
 Attachments: HADOOP-10103.patch


 update commons-lang from 2.5 to 2.6





[jira] [Commented] (HADOOP-10107) Server.getNumOpenConnections may throw NPE

2013-11-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13826465#comment-13826465
 ] 

Hudson commented on HADOOP-10107:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk #396 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/396/])
HADOOP-10107. Server.getNumOpenConnections may throw NPE. Contributed by Kihwal 
Lee. (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1543335)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java


 Server.getNumOpenConnections may throw NPE
 --

 Key: HADOOP-10107
 URL: https://issues.apache.org/jira/browse/HADOOP-10107
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: ipc
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Kihwal Lee
 Fix For: 2.3.0

 Attachments: HADOOP-10107.patch


 Found this in [build 
 #5440|https://builds.apache.org/job/PreCommit-HDFS-Build/5440/testReport/junit/org.apache.hadoop.hdfs.server.blockmanagement/TestUnderReplicatedBlocks/testSetrepIncWithUnderReplicatedBlocks/]
 Caused by: java.lang.NullPointerException
   at org.apache.hadoop.ipc.Server.getNumOpenConnections(Server.java:2434)
   at 
 org.apache.hadoop.ipc.metrics.RpcMetrics.numOpenConnections(RpcMetrics.java:74)





[jira] [Commented] (HADOOP-10110) hadoop-auth has a build break due to missing dependency

2013-11-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13826468#comment-13826468
 ] 

Hudson commented on HADOOP-10110:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk #396 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/396/])
HADOOP-10110. hadoop-auth has a build break due to missing dependency. 
(Contributed by Chuan Liu) (arp: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1543190)
* /hadoop/common/trunk/hadoop-common-project/hadoop-auth/pom.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


 hadoop-auth has a build break due to missing dependency
 ---

 Key: HADOOP-10110
 URL: https://issues.apache.org/jira/browse/HADOOP-10110
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Chuan Liu
Assignee: Chuan Liu
Priority: Blocker
 Fix For: 3.0.0, 2.2.1

 Attachments: HADOOP-10110.patch


 We have a build break in hadoop-auth if built with the maven cache cleaned. The 
 error looks like the following. The problem exists on both Windows and Linux. 
 If you have old jetty jars in your maven cache, you won't see the error.
 {noformat}
 [INFO] 
 
 [INFO] BUILD FAILURE
 [INFO] 
 
 [INFO] Total time: 1:29.469s
 [INFO] Finished at: Mon Nov 18 12:30:36 PST 2013
 [INFO] Final Memory: 37M/120M
 [INFO] 
 
 [ERROR] Failed to execute goal 
 org.apache.maven.plugins:maven-compiler-plugin:2.5.1:testCompile 
 (default-testCompile) on project hadoop-auth: Compilation failure: 
 Compilation failure:
 [ERROR] 
 /home/chuan/trunk/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/client/AuthenticatorTestCase.java:[84,13]
  cannot access org.mortbay.component.AbstractLifeCycle
 [ERROR] class file for org.mortbay.component.AbstractLifeCycle not found
 [ERROR] server = new Server(0);
 [ERROR] 
 /home/chuan/trunk/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/client/AuthenticatorTestCase.java:[94,29]
  cannot access org.mortbay.component.LifeCycle
 [ERROR] class file for org.mortbay.component.LifeCycle not found
 [ERROR] server.getConnectors()[0].setHost(host);
 [ERROR] 
 /home/chuan/trunk/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/client/AuthenticatorTestCase.java:[96,10]
  cannot find symbol
 [ERROR] symbol  : method start()
 [ERROR] location: class org.mortbay.jetty.Server
 [ERROR] 
 /home/chuan/trunk/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/client/AuthenticatorTestCase.java:[102,12]
  cannot find symbol
 [ERROR] symbol  : method stop()
 [ERROR] location: class org.mortbay.jetty.Server
 [ERROR] -> [Help 1]
 [ERROR]
 [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
 switch.
 [ERROR] Re-run Maven using the -X switch to enable full debug logging.
 [ERROR]
 [ERROR] For more information about the errors and possible solutions, please 
 read the following articles:
 [ERROR] [Help 1] 
 http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
 [ERROR]
 [ERROR] After correcting the problems, you can resume the build with the 
 command
 [ERROR]   mvn goals -rf :hadoop-auth
 {noformat}





[jira] [Commented] (HADOOP-10110) hadoop-auth has a build break due to missing dependency

2013-11-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13826496#comment-13826496
 ] 

Hudson commented on HADOOP-10110:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1613 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1613/])
HADOOP-10110. hadoop-auth has a build break due to missing dependency. 
(Contributed by Chuan Liu) (arp: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1543190)
* /hadoop/common/trunk/hadoop-common-project/hadoop-auth/pom.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


 hadoop-auth has a build break due to missing dependency
 ---

 Key: HADOOP-10110
 URL: https://issues.apache.org/jira/browse/HADOOP-10110
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Chuan Liu
Assignee: Chuan Liu
Priority: Blocker
 Fix For: 3.0.0, 2.2.1

 Attachments: HADOOP-10110.patch


 We have a build break in hadoop-auth if built with the maven cache cleaned. The 
 error looks like the following. The problem exists on both Windows and Linux. 
 If you have old jetty jars in your maven cache, you won't see the error.
 {noformat}
 [INFO] 
 
 [INFO] BUILD FAILURE
 [INFO] 
 
 [INFO] Total time: 1:29.469s
 [INFO] Finished at: Mon Nov 18 12:30:36 PST 2013
 [INFO] Final Memory: 37M/120M
 [INFO] 
 
 [ERROR] Failed to execute goal 
 org.apache.maven.plugins:maven-compiler-plugin:2.5.1:testCompile 
 (default-testCompile) on project hadoop-auth: Compilation failure: 
 Compilation failure:
 [ERROR] 
 /home/chuan/trunk/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/client/AuthenticatorTestCase.java:[84,13]
  cannot access org.mortbay.component.AbstractLifeCycle
 [ERROR] class file for org.mortbay.component.AbstractLifeCycle not found
 [ERROR] server = new Server(0);
 [ERROR] 
 /home/chuan/trunk/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/client/AuthenticatorTestCase.java:[94,29]
  cannot access org.mortbay.component.LifeCycle
 [ERROR] class file for org.mortbay.component.LifeCycle not found
 [ERROR] server.getConnectors()[0].setHost(host);
 [ERROR] 
 /home/chuan/trunk/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/client/AuthenticatorTestCase.java:[96,10]
  cannot find symbol
 [ERROR] symbol  : method start()
 [ERROR] location: class org.mortbay.jetty.Server
 [ERROR] 
 /home/chuan/trunk/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/client/AuthenticatorTestCase.java:[102,12]
  cannot find symbol
 [ERROR] symbol  : method stop()
 [ERROR] location: class org.mortbay.jetty.Server
 [ERROR] -> [Help 1]
 [ERROR]
 [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
 switch.
 [ERROR] Re-run Maven using the -X switch to enable full debug logging.
 [ERROR]
 [ERROR] For more information about the errors and possible solutions, please 
 read the following articles:
 [ERROR] [Help 1] 
 http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
 [ERROR]
 [ERROR] After correcting the problems, you can resume the build with the 
 command
 [ERROR]   mvn goals -rf :hadoop-auth
 {noformat}





[jira] [Commented] (HADOOP-10107) Server.getNumOpenConnections may throw NPE

2013-11-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13826493#comment-13826493
 ] 

Hudson commented on HADOOP-10107:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1613 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1613/])
HADOOP-10107. Server.getNumOpenConnections may throw NPE. Contributed by Kihwal 
Lee. (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1543335)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java


 Server.getNumOpenConnections may throw NPE
 --

 Key: HADOOP-10107
 URL: https://issues.apache.org/jira/browse/HADOOP-10107
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: ipc
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Kihwal Lee
 Fix For: 2.3.0

 Attachments: HADOOP-10107.patch


 Found this in [build 
 #5440|https://builds.apache.org/job/PreCommit-HDFS-Build/5440/testReport/junit/org.apache.hadoop.hdfs.server.blockmanagement/TestUnderReplicatedBlocks/testSetrepIncWithUnderReplicatedBlocks/]
 Caused by: java.lang.NullPointerException
   at org.apache.hadoop.ipc.Server.getNumOpenConnections(Server.java:2434)
   at 
 org.apache.hadoop.ipc.metrics.RpcMetrics.numOpenConnections(RpcMetrics.java:74)





[jira] [Commented] (HADOOP-10103) update commons-lang to 2.6

2013-11-19 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13826500#comment-13826500
 ] 

Steve Loughran commented on HADOOP-10103:
-

Akira, 

# review the changes and see if there are any changes that require action
# run a complete build and test of Hadoop, including HDFS & YARN - Jenkins 
doesn't do that for a change to hadoop-common. 

 update commons-lang to 2.6
 --

 Key: HADOOP-10103
 URL: https://issues.apache.org/jira/browse/HADOOP-10103
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: build
Affects Versions: 2.3.0
Reporter: Steve Loughran
Assignee: Akira AJISAKA
Priority: Minor
 Attachments: HADOOP-10103.patch


 update commons-lang from 2.5 to 2.6





[jira] [Commented] (HADOOP-10107) Server.getNumOpenConnections may throw NPE

2013-11-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13826506#comment-13826506
 ] 

Hudson commented on HADOOP-10107:
-

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1587 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1587/])
HADOOP-10107. Server.getNumOpenConnections may throw NPE. Contributed by Kihwal 
Lee. (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1543335)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java


 Server.getNumOpenConnections may throw NPE
 --

 Key: HADOOP-10107
 URL: https://issues.apache.org/jira/browse/HADOOP-10107
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: ipc
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Kihwal Lee
 Fix For: 2.3.0

 Attachments: HADOOP-10107.patch


 Found this in [build 
 #5440|https://builds.apache.org/job/PreCommit-HDFS-Build/5440/testReport/junit/org.apache.hadoop.hdfs.server.blockmanagement/TestUnderReplicatedBlocks/testSetrepIncWithUnderReplicatedBlocks/]
 Caused by: java.lang.NullPointerException
   at org.apache.hadoop.ipc.Server.getNumOpenConnections(Server.java:2434)
   at 
 org.apache.hadoop.ipc.metrics.RpcMetrics.numOpenConnections(RpcMetrics.java:74)





[jira] [Commented] (HADOOP-10110) hadoop-auth has a build break due to missing dependency

2013-11-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13826509#comment-13826509
 ] 

Hudson commented on HADOOP-10110:
-

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1587 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1587/])
HADOOP-10110. hadoop-auth has a build break due to missing dependency. 
(Contributed by Chuan Liu) (arp: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1543190)
* /hadoop/common/trunk/hadoop-common-project/hadoop-auth/pom.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


 hadoop-auth has a build break due to missing dependency
 ---

 Key: HADOOP-10110
 URL: https://issues.apache.org/jira/browse/HADOOP-10110
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Chuan Liu
Assignee: Chuan Liu
Priority: Blocker
 Fix For: 3.0.0, 2.2.1

 Attachments: HADOOP-10110.patch


 We have a build break in hadoop-auth if built with the maven cache cleaned. The 
 error looks like the following. The problem exists on both Windows and Linux. 
 If you have old jetty jars in your maven cache, you won't see the error.
 {noformat}
 [INFO] 
 
 [INFO] BUILD FAILURE
 [INFO] 
 
 [INFO] Total time: 1:29.469s
 [INFO] Finished at: Mon Nov 18 12:30:36 PST 2013
 [INFO] Final Memory: 37M/120M
 [INFO] 
 
 [ERROR] Failed to execute goal 
 org.apache.maven.plugins:maven-compiler-plugin:2.5.1:testCompile 
 (default-testCompile) on project hadoop-auth: Compilation failure: 
 Compilation failure:
 [ERROR] 
 /home/chuan/trunk/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/client/AuthenticatorTestCase.java:[84,13]
  cannot access org.mortbay.component.AbstractLifeCycle
 [ERROR] class file for org.mortbay.component.AbstractLifeCycle not found
 [ERROR] server = new Server(0);
 [ERROR] 
 /home/chuan/trunk/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/client/AuthenticatorTestCase.java:[94,29]
  cannot access org.mortbay.component.LifeCycle
 [ERROR] class file for org.mortbay.component.LifeCycle not found
 [ERROR] server.getConnectors()[0].setHost(host);
 [ERROR] 
 /home/chuan/trunk/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/client/AuthenticatorTestCase.java:[96,10]
  cannot find symbol
 [ERROR] symbol  : method start()
 [ERROR] location: class org.mortbay.jetty.Server
 [ERROR] 
 /home/chuan/trunk/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/client/AuthenticatorTestCase.java:[102,12]
  cannot find symbol
 [ERROR] symbol  : method stop()
 [ERROR] location: class org.mortbay.jetty.Server
 [ERROR] -> [Help 1]
 [ERROR]
 [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
 switch.
 [ERROR] Re-run Maven using the -X switch to enable full debug logging.
 [ERROR]
 [ERROR] For more information about the errors and possible solutions, please 
 read the following articles:
 [ERROR] [Help 1] 
 http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
 [ERROR]
 [ERROR] After correcting the problems, you can resume the build with the 
 command
 [ERROR]   mvn goals -rf :hadoop-auth
 {noformat}





Re: [jira] [Commented] (HADOOP-10110) hadoop-auth has a build break due to missing dependency

2013-11-19 Thread Christopher Swanson

unsubscribe


[jira] [Updated] (HADOOP-10047) Add a directbuffer Decompressor API to hadoop

2013-11-19 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-10047:
---

   Resolution: Fixed
Fix Version/s: (was: 3.0.0)
   2.3.0
   Status: Resolved  (was: Patch Available)

+1, lgtm.

I just committed this. Thanks [~gopalv]!

 Add a directbuffer Decompressor API to hadoop
 -

 Key: HADOOP-10047
 URL: https://issues.apache.org/jira/browse/HADOOP-10047
 Project: Hadoop Common
  Issue Type: New Feature
  Components: io
Affects Versions: 2.3.0
Reporter: Gopal V
Assignee: Gopal V
  Labels: compression
 Fix For: 2.3.0

 Attachments: DirectCompressor.html, DirectDecompressor.html, 
 HADOOP-10047-WIP.patch, HADOOP-10047-final.patch, 
 HADOOP-10047-redo-WIP.patch, HADOOP-10047-trunk.patch, 
 HADOOP-10047-with-tests.patch, decompress-benchmark.tgz


 With the Zero-Copy reads in HDFS (HDFS-5260), it becomes important to perform 
 all I/O operations without copying data into byte[] buffers or other buffers 
 which wrap over them.
 This is a proposal for adding a DirectDecompressor interface to the 
 io.compress, to indicate codecs which want to surface the direct buffer layer 
 upwards.
 The implementation should work with direct heap/mmap buffers and cannot 
 assume .array() availability.
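The proposal above can be sketched as an interface plus a toy pass-through implementation. The names and shapes here are illustrative only and need not match the API committed under HADOOP-10047; the point is that all buffer handling goes through ByteBuffer's read/write methods, never .array():

```java
import java.io.IOException;
import java.nio.ByteBuffer;

// Illustrative sketch of the proposed shape, not the committed API: a codec
// capability for decompressing directly between ByteBuffers. Implementations
// must use only ByteBuffer read/write operations, never .array(), since
// direct and mapped buffers do not expose a backing array.
interface DirectDecompressor {
    void decompress(ByteBuffer src, ByteBuffer dst) throws IOException;
}

// A toy "codec" (identity transform) demonstrating buffer handling that is
// valid for direct buffers: consume src, fill dst, flip dst for reading.
class IdentityDirectDecompressor implements DirectDecompressor {
    @Override
    public void decompress(ByteBuffer src, ByteBuffer dst) {
        dst.put(src);   // relative bulk copy; no backing-array access
        dst.flip();     // make the "decompressed" bytes readable
    }
}
```

Calling .array() on a buffer from ByteBuffer.allocateDirect or FileChannel.map throws UnsupportedOperationException, which is exactly why the description rules it out.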





[jira] [Commented] (HADOOP-10047) Add a directbuffer Decompressor API to hadoop

2013-11-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13826619#comment-13826619
 ] 

Hudson commented on HADOOP-10047:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #4760 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4760/])
HADOOP-10047. Add a direct-buffer based apis for compression. Contributed by 
Gopal V. (acmurthy: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1543456)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/DirectCompressor.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/DirectDecompressor.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zlib/ZlibCompressor.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zlib/ZlibDecompressor.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/zlib/TestZlibCompressorDecompressor.java


 Add a directbuffer Decompressor API to hadoop
 -

 Key: HADOOP-10047
 URL: https://issues.apache.org/jira/browse/HADOOP-10047
 Project: Hadoop Common
  Issue Type: New Feature
  Components: io
Affects Versions: 2.3.0
Reporter: Gopal V
Assignee: Gopal V
  Labels: compression
 Fix For: 2.3.0

 Attachments: DirectCompressor.html, DirectDecompressor.html, 
 HADOOP-10047-WIP.patch, HADOOP-10047-final.patch, 
 HADOOP-10047-redo-WIP.patch, HADOOP-10047-trunk.patch, 
 HADOOP-10047-with-tests.patch, decompress-benchmark.tgz


 With the Zero-Copy reads in HDFS (HDFS-5260), it becomes important to perform 
 all I/O operations without copying data into byte[] buffers or other buffers 
 which wrap over them.
 This is a proposal for adding a DirectDecompressor interface to the 
 io.compress, to indicate codecs which want to surface the direct buffer layer 
 upwards.
 The implementation should work with direct heap/mmap buffers and cannot 
 assume .array() availability.





[jira] [Commented] (HADOOP-9622) bzip2 codec can drop records when reading data in splits

2013-11-19 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13826706#comment-13826706
 ] 

Jason Lowe commented on HADOOP-9622:


Turns out there's already a followup for multibyte custom delimiters at 
HADOOP-9867, so I'll add the testcase and relevant details to that JIRA.

Thanks for the review, Chris.  Given your earlier +1 I think this is now ready 
to go as-is.  If there are no objections I'll commit this in the next few days.

 bzip2 codec can drop records when reading data in splits
 

 Key: HADOOP-9622
 URL: https://issues.apache.org/jira/browse/HADOOP-9622
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Affects Versions: 2.0.4-alpha, 0.23.8
Reporter: Jason Lowe
Assignee: Jason Lowe
Priority: Critical
 Attachments: HADOOP-9622-2.patch, HADOOP-9622-testcase.patch, 
 HADOOP-9622.patch, blockEndingInCR.txt.bz2, blockEndingInCRThenLF.txt.bz2


 Bzip2Codec.BZip2CompressionInputStream can cause records to be dropped when 
 reading them in splits based on where record delimiters occur relative to 
 compression block boundaries.
 Thanks to [~knoguchi] for discovering this problem while working on PIG-3251.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HADOOP-9867) org.apache.hadoop.mapred.LineRecordReader does not handle multibyte record delimiters well

2013-11-19 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13826729#comment-13826729
 ] 

Jason Lowe commented on HADOOP-9867:


Ran across this JIRA while discussing the intricacies of HADOOP-9622.  There's 
a relatively straightforward testcase that demonstrates the issue.  With the 
following plaintext input

{code:title=customdeliminput.txt}
abcxxx
defxxx
ghixxx
jklxxx
mnoxxx
pqrxxx
stuxxx
vw xxx
xyzxxx
{code}

run a wordcount job like this:

{noformat}
hadoop jar $HADOOP_PREFIX/share/hadoop/mapreduce/hadoop-mapreduce-examples*.jar 
wordcount -Dmapreduce.input.fileinputformat.split.maxsize=33 
-Dtextinputformat.record.delimiter=xxx customdeliminput.txt wcout
{noformat}

and we can see that one of the records was dropped due to incorrect split 
processing:

{noformat}
$ hadoop fs -cat wcout/part-r-0   
abc 1
def 1
ghi 1
jkl 1
mno 1
stu 1
vw  1
xyz 1
{noformat}

I don't think rewinding the seek position by the delimiter length is correct in 
all cases.  I believe that will lead to duplicate records rather than dropped 
records (e.g.: split ends exactly when a delimiter ends, and both splits end up 
processing the record after that delimiter).

Instead we can get correct behavior by treating any split in the middle of a 
multibyte custom delimiter as if the delimiter ended exactly at the end of the 
split, i.e.: the consumer of the prior split is responsible for processing the 
divided delimiter and the subsequent record.  The consumer of the next split 
then tosses the first record up to the first full delimiter as usual (i.e.: 
including the partial delimiter at the beginning of the split) and proceeds to 
process any subsequent records.  That way we don't get any dropped records or 
duplicate records.

I think one way of accomplishing this is to have the LineReader for multibyte 
custom delimiters report the current position as the end of the record data 
*without* the delimiter bytes.  Then any record that ends exactly at the end of 
the split or whose delimiter straddles the split boundary will cause the prior 
split to consume the extra record necessary.
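The rule above can be sketched in a small standalone simulation (this is illustrative code, not the actual LineRecordReader/LineReader implementation): the reader reports each record's end position without the delimiter bytes, the split containing a record's data end consumes it, and the next split tosses everything up to and including the first full delimiter.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical simulation of the proposed multibyte-delimiter split rule.
class SplitRuleSketch {
    static List<String> readSplit(String data, String delim, int start, int end) {
        List<String> out = new ArrayList<>();
        int readPos = start;
        int reported = start;
        if (start > 0) {
            // Toss the first record, up to and including the first *full*
            // delimiter; a delimiter straddling the split start won't match,
            // so the record after the partial delimiter is tossed too.
            int d = data.indexOf(delim, start);
            if (d < 0) return out;
            readPos = d + delim.length();
            reported = d; // data end of the tossed record, sans delimiter
        }
        while (reported <= end && readPos < data.length()) {
            int d = data.indexOf(delim, readPos);
            int dataEnd = (d < 0) ? data.length() : d;
            out.add(data.substring(readPos, dataEnd));
            reported = dataEnd;                        // report data end without delimiter bytes
            readPos = (d < 0) ? data.length() : d + delim.length();
        }
        return out;
    }

    public static void main(String[] args) {
        String data = "abcXYdefXYghiXYjkl";      // delimiter "XY"
        // The split boundary falls inside the second "XY" (bytes 8-9):
        List<String> s1 = readSplit(data, "XY", 0, 8);
        List<String> s2 = readSplit(data, "XY", 9, 17);
        List<String> all = new ArrayList<>(s1);
        all.addAll(s2);
        // Every record shows up exactly once across the two splits.
        if (!all.equals(Arrays.asList("abc", "def", "ghi", "jkl")))
            throw new AssertionError(all);
        System.out.println(s1 + " " + s2); // [abc, def, ghi] [jkl]
    }
}
```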

 org.apache.hadoop.mapred.LineRecordReader does not handle multibyte record 
 delimiters well
 --

 Key: HADOOP-9867
 URL: https://issues.apache.org/jira/browse/HADOOP-9867
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Affects Versions: 0.20.2
 Environment: CDH3U2 Redhat linux 5.7
Reporter: Kris Geusebroek

 Having defined a recorddelimiter of multiple bytes in a new InputFileFormat 
 sometimes has the effect of skipping records from the input.
 This happens when the input splits are split off just after a 
 recordseparator. Starting point for the next split would be non zero and 
 skipFirstLine would be true. A seek into the file is done to start - 1 and 
 the text until the first recorddelimiter is ignored (due to the presumption 
 that this record is already handled by the previous maptask). Since the 
 record delimiter is multibyte the seek only got the last byte of the delimiter 
 into scope and it's not recognized as a full delimiter. So the text is skipped 
 until the next delimiter (ignoring a full record!!)



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HADOOP-10112) har file listing doesn't work with wild card

2013-11-19 Thread Kousuke Saruta (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13826781#comment-13826781
 ] 

Kousuke Saruta commented on HADOOP-10112:
-

You are using branch-1, right?
I tried to reproduce this on trunk but couldn't; I could reproduce it on branch-1.
The glob code was changed between branch-1 and trunk (or branch-2).

 har file listing  doesn't work with wild card
 -

 Key: HADOOP-10112
 URL: https://issues.apache.org/jira/browse/HADOOP-10112
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools
Reporter: Brandon Li

 [test@test001 root]$ hdfs dfs -ls har:///tmp/filename.har/*
 -ls: Can not create a Path from an empty string
 Usage: hadoop fs [generic options] -ls [-d] [-h] [-R] [path ...]
 It works without *.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HADOOP-9867) org.apache.hadoop.mapred.LineRecordReader does not handle multibyte record delimiters well

2013-11-19 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated HADOOP-9867:
---

 Priority: Critical  (was: Major)
 Target Version/s: 2.3.0
Affects Version/s: 0.23.9
   2.2.0

Raising severity since this involves loss of data.  I also confirmed this is an 
issue on recent Hadoop versions.

 org.apache.hadoop.mapred.LineRecordReader does not handle multibyte record 
 delimiters well
 --

 Key: HADOOP-9867
 URL: https://issues.apache.org/jira/browse/HADOOP-9867
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Affects Versions: 0.20.2, 0.23.9, 2.2.0
 Environment: CDH3U2 Redhat linux 5.7
Reporter: Kris Geusebroek
Priority: Critical

 Having defined a recorddelimiter of multiple bytes in a new InputFileFormat 
 sometimes has the effect of skipping records from the input.
 This happens when the input splits are split off just after a 
 recordseparator. Starting point for the next split would be non zero and 
 skipFirstLine would be true. A seek into the file is done to start - 1 and 
 the text until the first recorddelimiter is ignored (due to the presumption 
 that this record is already handled by the previous maptask). Since the 
 record delimiter is multibyte the seek only got the last byte of the delimiter 
 into scope and it's not recognized as a full delimiter. So the text is skipped 
 until the next delimiter (ignoring a full record!!)



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HADOOP-10047) Add a directbuffer Decompressor API to hadoop

2013-11-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13826893#comment-13826893
 ] 

Hudson commented on HADOOP-10047:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #4761 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4761/])
HADOOP-10047. Add a direct-buffer based apis for compression. Contributed by 
Gopal V. (acmurthy: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1543542)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/DefaultCodec.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/DirectDecompressionCodec.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/DirectDecompressor.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/GzipCodec.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/SnappyCodec.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/snappy/SnappyDecompressor.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zlib/ZlibDecompressor.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zlib/ZlibFactory.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/snappy/TestSnappyCompressorDecompressor.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/zlib/TestZlibCompressorDecompressor.java
Revert HADOOP-10047, wrong patch. (acmurthy: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1543538)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/DirectCompressor.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/DirectDecompressor.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zlib/ZlibCompressor.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zlib/ZlibDecompressor.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/zlib/TestZlibCompressorDecompressor.java


 Add a directbuffer Decompressor API to hadoop
 -

 Key: HADOOP-10047
 URL: https://issues.apache.org/jira/browse/HADOOP-10047
 Project: Hadoop Common
  Issue Type: New Feature
  Components: io
Affects Versions: 2.3.0
Reporter: Gopal V
Assignee: Gopal V
  Labels: compression
 Fix For: 2.3.0

 Attachments: DirectCompressor.html, DirectDecompressor.html, 
 HADOOP-10047-WIP.patch, HADOOP-10047-final.patch, 
 HADOOP-10047-redo-WIP.patch, HADOOP-10047-trunk.patch, 
 HADOOP-10047-with-tests.patch, decompress-benchmark.tgz


 With the Zero-Copy reads in HDFS (HDFS-5260), it becomes important to perform 
 all I/O operations without copying data into byte[] buffers or other buffers 
 which wrap over them.
 This is a proposal for adding a DirectDecompressor interface to the 
 io.compress, to indicate codecs which want to surface the direct buffer layer 
 upwards.
 The implementation should work with direct heap/mmap buffers and cannot 
 assume .array() availability.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HADOOP-10116) fix inconsistent synchronization warnings in ZlibCompressor

2013-11-19 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-10116:
-

 Summary: fix inconsistent synchronization warnings in 
ZlibCompressor
 Key: HADOOP-10116
 URL: https://issues.apache.org/jira/browse/HADOOP-10116
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Colin Patrick McCabe


Fix findbugs warnings in ZlibCompressor.  I believe these were introduced by 
HADOOP-10047.

{code}
CodeWarning
IS  Inconsistent synchronization of 
org.apache.hadoop.io.compress.zlib.ZlibCompressor.keepUncompressedBuf; locked 
57% of time
IS  Inconsistent synchronization of 
org.apache.hadoop.io.compress.zlib.ZlibCompressor.userBuf; locked 60% of time
IS  Inconsistent synchronization of 
org.apache.hadoop.io.compress.zlib.ZlibCompressor.userBufLen; locked 85% of time
IS  Inconsistent synchronization of 
org.apache.hadoop.io.compress.zlib.ZlibDecompressor.userBuf; locked 60% of time
IS  Inconsistent synchronization of 
org.apache.hadoop.io.compress.zlib.ZlibDecompressor.userBufLen; locked 77% of 
time
Dodgy Warnings

CodeWarning
DLS Dead store to pos2 in 
org.apache.hadoop.io.compress.zlib.ZlibCompressor.put(ByteBuffer, ByteBuffer)
DLS Dead store to pos2 in 
org.apache.hadoop.io.compress.zlib.ZlibDecompressor.put(ByteBuffer, ByteBuffer)
{code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HADOOP-10106) Incorrect thread name in RPC log messages

2013-11-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13826961#comment-13826961
 ] 

Hadoop QA commented on HADOOP-10106:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12614181/hadoop_10106_trunk.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3299//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3299//console

This message is automatically generated.

 Incorrect thread name in RPC log messages
 -

 Key: HADOOP-10106
 URL: https://issues.apache.org/jira/browse/HADOOP-10106
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ming Ma
Priority: Minor
 Attachments: hadoop_10106_trunk.patch


 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 8020: 
 readAndProcess from client 10.115.201.46 threw exception 
 org.apache.hadoop.ipc.RpcServerException: Unknown out of band call 
 #-2147483647
 This is thrown by a reader thread, so the message should be like
 INFO org.apache.hadoop.ipc.Server: Socket Reader #1 for port 8020: 
 readAndProcess from client 10.115.201.46 threw exception 
 org.apache.hadoop.ipc.RpcServerException: Unknown out of band call 
 #-2147483647
 Another example is Responder.processResponse, which can also be called by 
 handler thread. When that happend, the thread name should be the handler 
 thread, not the responder thread.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HADOOP-10117) Unable to compile source code from stable 2.2.0 release

2013-11-19 Thread Prasad Ramalingam (JIRA)
Prasad Ramalingam created HADOOP-10117:
--

 Summary: Unable to compile source code from stable 2.2.0 release
 Key: HADOOP-10117
 URL: https://issues.apache.org/jira/browse/HADOOP-10117
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.2.0
 Environment: windows 7
Reporter: Prasad Ramalingam


I am trying to compile the source code but I am getting the following error.

[ERROR] D:\hadoop-src\hadoop-2.2.0-src\hadoop-common-project\hadoop-auth\src\tes
t\java\org\apache\hadoop\security\authentication\client\AuthenticatorTestCase.ja
va:[86,13] cannot access org.mortbay.component.AbstractLifeCycle
class file for org.mortbay.component.AbstractLifeCycle not found
server = new Server(0);
[ERROR] D:\hadoop-src\hadoop-2.2.0-src\hadoop-common-project\hadoop-auth\src\tes
t\java\org\apache\hadoop\security\authentication\client\AuthenticatorTestCase.ja
va:[96,29] cannot access org.mortbay.component.LifeCycle
class file for org.mortbay.component.LifeCycle not found
server.getConnectors()[0].setHost(host);
[ERROR] D:\hadoop-src\hadoop-2.2.0-src\hadoop-common-project\hadoop-auth\src\tes
t\java\org\apache\hadoop\security\authentication\client\AuthenticatorTestCase.ja
va:[98,10] cannot find symbol
symbol  : method start()
location: class org.mortbay.jetty.Server
[ERROR] D:\hadoop-src\hadoop-2.2.0-src\hadoop-common-project\hadoop-auth\src\tes
t\java\org\apache\hadoop\security\authentication\client\AuthenticatorTestCase.ja
va:[104,12] cannot find symbol
symbol  : method stop()
location: class org.mortbay.jetty.Server


Looks like the build is broken.

Please fix and let me know when I can download the stable version.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Resolved] (HADOOP-10025) Replace HttpConfig#getSchemePrefix with implicit scheme in YARN/MR

2013-11-19 Thread Omkar Vinit Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omkar Vinit Joshi resolved HADOOP-10025.


Resolution: Won't Fix

 Replace HttpConfig#getSchemePrefix with implicit scheme in YARN/MR
 --

 Key: HADOOP-10025
 URL: https://issues.apache.org/jira/browse/HADOOP-10025
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Haohui Mai
Assignee: Omkar Vinit Joshi
 Attachments: HADOOP-10025.000.patch






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HADOOP-10025) Replace HttpConfig#getSchemePrefix with implicit scheme in YARN/MR

2013-11-19 Thread Omkar Vinit Joshi (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13826983#comment-13826983
 ] 

Omkar Vinit Joshi commented on HADOOP-10025:


Closing this as it is no longer valid.

 Replace HttpConfig#getSchemePrefix with implicit scheme in YARN/MR
 --

 Key: HADOOP-10025
 URL: https://issues.apache.org/jira/browse/HADOOP-10025
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Haohui Mai
Assignee: Omkar Vinit Joshi
 Attachments: HADOOP-10025.000.patch






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HADOOP-10118) CommandFormat never parse --

2013-11-19 Thread Kousuke Saruta (JIRA)
Kousuke Saruta created HADOOP-10118:
---

 Summary: CommandFormat never parse --
 Key: HADOOP-10118
 URL: https://issues.apache.org/jira/browse/HADOOP-10118
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 3.0.0
Reporter: Kousuke Saruta


We cannot use the -- option to skip option processing for the args following it.
CommandFormat#parse is implemented as follows:

{code}
public void parse(List<String> args) {
...
  } else if (arg.equals("--")) { // force end of option processing
args.remove(pos);
break;
  }
...
{code}

But FsShell is called through ToolRunner, and ToolRunner uses 
GenericOptionsParser. GenericOptionsParser uses GnuParser, which discards -- 
when parsing args.
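A small standalone sketch of the interaction (illustrative stand-ins, not the real GenericOptionsParser/CommandFormat code): the outer GNU-style parser consumes the -- token, so the inner parser's -- branch is unreachable.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical stand-in for the outer GnuParser-style behavior: "--" ends
// option processing and is dropped from the args passed on.
class DoubleDashSketch {
    static List<String> outerParse(List<String> args) {
        List<String> remaining = new ArrayList<>();
        boolean optionsDone = false;
        for (String arg : args) {
            if (!optionsDone && arg.equals("--")) {
                optionsDone = true;   // "--" is discarded here, never forwarded
                continue;
            }
            remaining.add(arg);
        }
        return remaining;
    }

    public static void main(String[] args) {
        List<String> cli = Arrays.asList("-ls", "--", "-weirdFileName");
        // By the time a CommandFormat#parse-like inner parser runs, the "--"
        // marker is gone, so "-weirdFileName" can be mistaken for an option.
        System.out.println(outerParse(cli)); // [-ls, -weirdFileName]
    }
}
```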



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HADOOP-10117) Unable to compile source code from stable 2.2.0 release

2013-11-19 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13827009#comment-13827009
 ] 

Tsz Wo (Nicholas), SZE commented on HADOOP-10117:
-

Just tried it.  I had no problem compiling it on my Mac.  Looks like this is 
Windows-specific.

 Unable to compile source code from stable 2.2.0 release
 ---

 Key: HADOOP-10117
 URL: https://issues.apache.org/jira/browse/HADOOP-10117
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.2.0
 Environment: windows 7
Reporter: Prasad Ramalingam

 I am trying to compile the source code but I am getting the following error.
 [ERROR] 
 D:\hadoop-src\hadoop-2.2.0-src\hadoop-common-project\hadoop-auth\src\tes
 t\java\org\apache\hadoop\security\authentication\client\AuthenticatorTestCase.ja
 va:[86,13] cannot access org.mortbay.component.AbstractLifeCycle
 class file for org.mortbay.component.AbstractLifeCycle not found
 server = new Server(0);
 [ERROR] 
 D:\hadoop-src\hadoop-2.2.0-src\hadoop-common-project\hadoop-auth\src\tes
 t\java\org\apache\hadoop\security\authentication\client\AuthenticatorTestCase.ja
 va:[96,29] cannot access org.mortbay.component.LifeCycle
 class file for org.mortbay.component.LifeCycle not found
 server.getConnectors()[0].setHost(host);
 [ERROR] 
 D:\hadoop-src\hadoop-2.2.0-src\hadoop-common-project\hadoop-auth\src\tes
 t\java\org\apache\hadoop\security\authentication\client\AuthenticatorTestCase.ja
 va:[98,10] cannot find symbol
 symbol  : method start()
 location: class org.mortbay.jetty.Server
 [ERROR] 
 D:\hadoop-src\hadoop-2.2.0-src\hadoop-common-project\hadoop-auth\src\tes
 t\java\org\apache\hadoop\security\authentication\client\AuthenticatorTestCase.ja
 va:[104,12] cannot find symbol
 symbol  : method stop()
 location: class org.mortbay.jetty.Server
 Looks like the build is broken.
 Please fix and let me know as to when I can download the stable version.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Resolved] (HADOOP-10117) Unable to compile source code from stable 2.2.0 release

2013-11-19 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth resolved HADOOP-10117.


Resolution: Duplicate

This looks like a duplicate of HADOOP-10110.  A fix was committed for that 
yesterday, so if you pick up the most recent version of the code, then I expect 
the problem will go away.

 Unable to compile source code from stable 2.2.0 release
 ---

 Key: HADOOP-10117
 URL: https://issues.apache.org/jira/browse/HADOOP-10117
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.2.0
 Environment: windows 7
Reporter: Prasad Ramalingam

 I am trying to compile the source code but I am getting the following error.
 [ERROR] 
 D:\hadoop-src\hadoop-2.2.0-src\hadoop-common-project\hadoop-auth\src\tes
 t\java\org\apache\hadoop\security\authentication\client\AuthenticatorTestCase.ja
 va:[86,13] cannot access org.mortbay.component.AbstractLifeCycle
 class file for org.mortbay.component.AbstractLifeCycle not found
 server = new Server(0);
 [ERROR] 
 D:\hadoop-src\hadoop-2.2.0-src\hadoop-common-project\hadoop-auth\src\tes
 t\java\org\apache\hadoop\security\authentication\client\AuthenticatorTestCase.ja
 va:[96,29] cannot access org.mortbay.component.LifeCycle
 class file for org.mortbay.component.LifeCycle not found
 server.getConnectors()[0].setHost(host);
 [ERROR] 
 D:\hadoop-src\hadoop-2.2.0-src\hadoop-common-project\hadoop-auth\src\tes
 t\java\org\apache\hadoop\security\authentication\client\AuthenticatorTestCase.ja
 va:[98,10] cannot find symbol
 symbol  : method start()
 location: class org.mortbay.jetty.Server
 [ERROR] 
 D:\hadoop-src\hadoop-2.2.0-src\hadoop-common-project\hadoop-auth\src\tes
 t\java\org\apache\hadoop\security\authentication\client\AuthenticatorTestCase.ja
 va:[104,12] cannot find symbol
 symbol  : method stop()
 location: class org.mortbay.jetty.Server
 Looks like the build is broken.
 Please fix and let me know as to when I can download the stable version.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HADOOP-10117) Unable to compile source code from stable 2.2.0 release

2013-11-19 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13827107#comment-13827107
 ] 

Tsz Wo (Nicholas), SZE commented on HADOOP-10117:
-

bq. This looks like a duplicate of HADOOP-10110. ...

From the description, the source dir was D:\hadoop-src\hadoop-2.2.0-src\, so 
the source was probably from the 2.2.0 release, not trunk.

 Unable to compile source code from stable 2.2.0 release
 ---

 Key: HADOOP-10117
 URL: https://issues.apache.org/jira/browse/HADOOP-10117
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.2.0
 Environment: windows 7
Reporter: Prasad Ramalingam

 I am trying to compile the source code but I am getting the following error.
 [ERROR] 
 D:\hadoop-src\hadoop-2.2.0-src\hadoop-common-project\hadoop-auth\src\tes
 t\java\org\apache\hadoop\security\authentication\client\AuthenticatorTestCase.ja
 va:[86,13] cannot access org.mortbay.component.AbstractLifeCycle
 class file for org.mortbay.component.AbstractLifeCycle not found
 server = new Server(0);
 [ERROR] 
 D:\hadoop-src\hadoop-2.2.0-src\hadoop-common-project\hadoop-auth\src\tes
 t\java\org\apache\hadoop\security\authentication\client\AuthenticatorTestCase.ja
 va:[96,29] cannot access org.mortbay.component.LifeCycle
 class file for org.mortbay.component.LifeCycle not found
 server.getConnectors()[0].setHost(host);
 [ERROR] 
 D:\hadoop-src\hadoop-2.2.0-src\hadoop-common-project\hadoop-auth\src\tes
 t\java\org\apache\hadoop\security\authentication\client\AuthenticatorTestCase.ja
 va:[98,10] cannot find symbol
 symbol  : method start()
 location: class org.mortbay.jetty.Server
 [ERROR] 
 D:\hadoop-src\hadoop-2.2.0-src\hadoop-common-project\hadoop-auth\src\tes
 t\java\org\apache\hadoop\security\authentication\client\AuthenticatorTestCase.ja
 va:[104,12] cannot find symbol
 symbol  : method stop()
 location: class org.mortbay.jetty.Server
 Looks like the build is broken.
 Please fix and let me know as to when I can download the stable version.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HADOOP-10117) Unable to compile source code from stable 2.2.0 release

2013-11-19 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13827131#comment-13827131
 ] 

Chris Nauroth commented on HADOOP-10117:


HADOOP-10110 was also applicable to 2.2.0.  Patches were committed to trunk, 
branch-2, and branch-2.2.

 Unable to compile source code from stable 2.2.0 release
 ---

 Key: HADOOP-10117
 URL: https://issues.apache.org/jira/browse/HADOOP-10117
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.2.0
 Environment: windows 7
Reporter: Prasad Ramalingam

 I am trying to compile the source code but I am getting the following error.
 [ERROR] 
 D:\hadoop-src\hadoop-2.2.0-src\hadoop-common-project\hadoop-auth\src\tes
 t\java\org\apache\hadoop\security\authentication\client\AuthenticatorTestCase.ja
 va:[86,13] cannot access org.mortbay.component.AbstractLifeCycle
 class file for org.mortbay.component.AbstractLifeCycle not found
 server = new Server(0);
 [ERROR] 
 D:\hadoop-src\hadoop-2.2.0-src\hadoop-common-project\hadoop-auth\src\tes
 t\java\org\apache\hadoop\security\authentication\client\AuthenticatorTestCase.ja
 va:[96,29] cannot access org.mortbay.component.LifeCycle
 class file for org.mortbay.component.LifeCycle not found
 server.getConnectors()[0].setHost(host);
 [ERROR] 
 D:\hadoop-src\hadoop-2.2.0-src\hadoop-common-project\hadoop-auth\src\tes
 t\java\org\apache\hadoop\security\authentication\client\AuthenticatorTestCase.ja
 va:[98,10] cannot find symbol
 symbol  : method start()
 location: class org.mortbay.jetty.Server
 [ERROR] 
 D:\hadoop-src\hadoop-2.2.0-src\hadoop-common-project\hadoop-auth\src\tes
 t\java\org\apache\hadoop\security\authentication\client\AuthenticatorTestCase.ja
 va:[104,12] cannot find symbol
 symbol  : method stop()
 location: class org.mortbay.jetty.Server
 Looks like the build is broken.
 Please fix and let me know as to when I can download the stable version.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HADOOP-10117) Unable to compile source code from stable 2.2.0 release

2013-11-19 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13827140#comment-13827140
 ] 

Tsz Wo (Nicholas), SZE commented on HADOOP-10117:
-

I see.  Thanks.

 Unable to compile source code from stable 2.2.0 release
 ---

 Key: HADOOP-10117
 URL: https://issues.apache.org/jira/browse/HADOOP-10117
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.2.0
 Environment: windows 7
Reporter: Prasad Ramalingam

 I am trying to compile the source code but I am getting the following error.
 [ERROR] 
 D:\hadoop-src\hadoop-2.2.0-src\hadoop-common-project\hadoop-auth\src\tes
 t\java\org\apache\hadoop\security\authentication\client\AuthenticatorTestCase.ja
 va:[86,13] cannot access org.mortbay.component.AbstractLifeCycle
 class file for org.mortbay.component.AbstractLifeCycle not found
 server = new Server(0);
 [ERROR] 
 D:\hadoop-src\hadoop-2.2.0-src\hadoop-common-project\hadoop-auth\src\tes
 t\java\org\apache\hadoop\security\authentication\client\AuthenticatorTestCase.ja
 va:[96,29] cannot access org.mortbay.component.LifeCycle
 class file for org.mortbay.component.LifeCycle not found
 server.getConnectors()[0].setHost(host);
 [ERROR] 
 D:\hadoop-src\hadoop-2.2.0-src\hadoop-common-project\hadoop-auth\src\tes
 t\java\org\apache\hadoop\security\authentication\client\AuthenticatorTestCase.ja
 va:[98,10] cannot find symbol
 symbol  : method start()
 location: class org.mortbay.jetty.Server
 [ERROR] 
 D:\hadoop-src\hadoop-2.2.0-src\hadoop-common-project\hadoop-auth\src\tes
 t\java\org\apache\hadoop\security\authentication\client\AuthenticatorTestCase.ja
 va:[104,12] cannot find symbol
 symbol  : method stop()
 location: class org.mortbay.jetty.Server
 Looks like the build is broken.
 Please fix and let me know as to when I can download the stable version.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HADOOP-10112) har file listing doesn't work with wild card

2013-11-19 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HADOOP-10112:


Affects Version/s: 2.2.1

 har file listing  doesn't work with wild card
 -

 Key: HADOOP-10112
 URL: https://issues.apache.org/jira/browse/HADOOP-10112
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools
Affects Versions: 2.2.1
Reporter: Brandon Li

 [test@test001 root]$ hdfs dfs -ls har:///tmp/filename.har/*
 -ls: Can not create a Path from an empty string
 Usage: hadoop fs [generic options] -ls [-d] [-h] [-R] [path ...]
 It works without *.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HADOOP-9870) Mixed configurations for JVM -Xmx in hadoop command

2013-11-19 Thread Jayesh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayesh updated HADOOP-9870:
---

Attachment: HADOOP-9870.patch

Thanks for reviewing, Plamen.

I have updated the patch to address JAVA_HEAP_MAX in the hadoop shell script and 
also to include the windows .cmd file update.

Please review and vote if it looks good.

Thanks
Jayesh

 Mixed configurations for JVM -Xmx in hadoop command
 ---

 Key: HADOOP-9870
 URL: https://issues.apache.org/jira/browse/HADOOP-9870
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Wei Yan
 Attachments: HADOOP-9870.patch, HADOOP-9870.patch


 When we use hadoop command to launch a class, there are two places setting 
 the -Xmx configuration.
 *1*. The first place is located in file 
 {{hadoop-common-project/hadoop-common/src/main/bin/hadoop}}.
 {code}
 exec "$JAVA" $JAVA_HEAP_MAX $HADOOP_OPTS $CLASS "$@"
 {code}
 Here $JAVA_HEAP_MAX is configured in hadoop-config.sh 
 ({{hadoop-common-project/hadoop-common/src/main/bin/hadoop-config.sh}}). The 
 default value is -Xmx1000m.
 *2*. The second place is set with $HADOOP_OPTS in file 
 {{hadoop-common-project/hadoop-common/src/main/bin/hadoop}}.
 {code}
 HADOOP_OPTS="$HADOOP_OPTS $HADOOP_CLIENT_OPTS"
 {code}
 Here $HADOOP_CLIENT_OPTS is set in hadoop-env.sh 
 ({{hadoop-common-project/hadoop-common/src/main/conf/hadoop-env.sh}})
 {code}
 export HADOOP_CLIENT_OPTS="-Xmx512m $HADOOP_CLIENT_OPTS"
 {code}
 Currently the final default java command looks like:
 {code}java -Xmx1000m  -Xmx512m CLASS_NAME ARGUMENTS{code}
 And if users also specify the -Xmx in the $HADOOP_CLIENT_OPTS, there will be 
 three -Xmx configurations. 
 The hadoop setup tutorial only discusses hadoop-env.sh, and it appears that 
 users should not change hadoop-config.sh.
 We should make hadoop smart enough to choose the right setting before launching 
 the java command, instead of leaving the decision to the JVM.





[jira] [Commented] (HADOOP-10112) har file listing doesn't work with wild card

2013-11-19 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13827136#comment-13827136
 ] 

Brandon Li commented on HADOOP-10112:
-

Trunk works fine. I was using branch 2.2. Let me update the JIRA accordingly.


 har file listing  doesn't work with wild card
 -

 Key: HADOOP-10112
 URL: https://issues.apache.org/jira/browse/HADOOP-10112
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools
Affects Versions: 2.2.1
Reporter: Brandon Li

 [test@test001 root]$ hdfs dfs -ls har:///tmp/filename.har/*
 -ls: Can not create a Path from an empty string
 Usage: hadoop fs [generic options] -ls [-d] [-h] [-R] [path ...]
 It works without *.





[jira] [Commented] (HADOOP-9870) Mixed configurations for JVM -Xmx in hadoop command

2013-11-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13827181#comment-13827181
 ] 

Hadoop QA commented on HADOOP-9870:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12614739/HADOOP-9870.patch
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3300//console

This message is automatically generated.

 Mixed configurations for JVM -Xmx in hadoop command
 ---

 Key: HADOOP-9870
 URL: https://issues.apache.org/jira/browse/HADOOP-9870
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Wei Yan
 Attachments: HADOOP-9870.patch, HADOOP-9870.patch







[jira] [Commented] (HADOOP-10103) update commons-lang to 2.6

2013-11-19 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13827260#comment-13827260
 ] 

Akira AJISAKA commented on HADOOP-10103:


Thanks! 

bq. review the changes and see if there are any changes that require action

Lang 2.6 is a binary-compatible release with Lang 2.5.
http://commons.apache.org/proper/commons-lang/release-notes/RELEASE-NOTES-2.6.txt
No action is needed.

bq. run is a complete build and test of Hadoop, including HDFS & YARN - Jenkins 
doesn't do that for a change to hadoop-common.

I built with the patch applied. I'll run the complete test suite.

 update commons-lang to 2.6
 --

 Key: HADOOP-10103
 URL: https://issues.apache.org/jira/browse/HADOOP-10103
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: build
Affects Versions: 2.3.0
Reporter: Steve Loughran
Assignee: Akira AJISAKA
Priority: Minor
 Attachments: HADOOP-10103.patch


 update commons-lang from 2.5 to 2.6





[jira] [Updated] (HADOOP-9870) Mixed configurations for JVM -Xmx in hadoop command

2013-11-19 Thread Jayesh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayesh updated HADOOP-9870:
---

Attachment: HADOOP-9870.patch

 Mixed configurations for JVM -Xmx in hadoop command
 ---

 Key: HADOOP-9870
 URL: https://issues.apache.org/jira/browse/HADOOP-9870
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Wei Yan
 Attachments: HADOOP-9870.patch, HADOOP-9870.patch, HADOOP-9870.patch







[jira] [Commented] (HADOOP-9870) Mixed configurations for JVM -Xmx in hadoop command

2013-11-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13827301#comment-13827301
 ] 

Hadoop QA commented on HADOOP-9870:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12614776/HADOOP-9870.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3301//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3301//console

This message is automatically generated.

 Mixed configurations for JVM -Xmx in hadoop command
 ---

 Key: HADOOP-9870
 URL: https://issues.apache.org/jira/browse/HADOOP-9870
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Wei Yan
 Attachments: HADOOP-9870.patch, HADOOP-9870.patch, HADOOP-9870.patch







[jira] [Commented] (HADOOP-9870) Mixed configurations for JVM -Xmx in hadoop command

2013-11-19 Thread Vinay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13827338#comment-13827338
 ] 

Vinay commented on HADOOP-9870:
---

AFAIK, the java process parses its command-line VM args sequentially. If the same 
VM argument is set multiple times, it will use the last one as its value.
So even though hadoop passes multiple -Xmx configurations, the last one in the 
list takes effect. 
Users need not be confused by the other JVM argument (-Xmx1000m); it serves as a 
fallback when nothing is configured (for example, when HADOOP_CONF_DIR points 
elsewhere and has no hadoop-env.sh file).

So I am not seeing any issue here.
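The last-one-wins behavior described above can be illustrated with a small sketch. This is hypothetical plain Python, not Hadoop code; the variable names and default values are taken from the issue description, and the "last flag wins" rule is the HotSpot behavior being discussed:

```python
# Emulate the command line that bin/hadoop assembles (defaults from the issue).
java_heap_max = "-Xmx1000m"        # default set in hadoop-config.sh
hadoop_client_opts = "-Xmx512m"    # default set in hadoop-env.sh
cmdline = f"java {java_heap_max} {hadoop_client_opts} CLASS_NAME ARGUMENTS"

def effective_xmx(cmd):
    """Return the last -Xmx flag on the command line; a HotSpot JVM
    parses repeated VM args left to right, so the last one takes effect."""
    flags = [tok for tok in cmd.split() if tok.startswith("-Xmx")]
    return flags[-1] if flags else None

print(effective_xmx(cmdline))  # -Xmx512m
```

If a user adds their own -Xmx to HADOOP_CLIENT_OPTS, that value lands last on the line and wins, which matches the behavior reported in this thread.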

 Mixed configurations for JVM -Xmx in hadoop command
 ---

 Key: HADOOP-9870
 URL: https://issues.apache.org/jira/browse/HADOOP-9870
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Wei Yan
 Attachments: HADOOP-9870.patch, HADOOP-9870.patch, HADOOP-9870.patch


 When we use hadoop command to launch a class, there are two places setting 
 the -Xmx configuration.
 *1*. The first place is located in file 
 {{hadoop-common-project/hadoop-common/src/main/bin/hadoop}}.
 {code}
 exec $JAVA $JAVA_HEAP_MAX $HADOOP_OPTS $CLASS $@
 {code}
 Here $JAVA_HEAP_MAX is configured in hadoop-config.sh 
 ({{hadoop-common-project/hadoop-common/src/main/bin/hadoop-config.sh}}). The 
 default value is -Xmx1000m.
 *2*. The second place is set with $HADOOP_OPTS in file 
 {{hadoop-common-project/hadoop-common/src/main/bin/hadoop}}.
 {code}
 HADOOP_OPTS=$HADOOP_OPTS $HADOOP_CLIENT_OPTS
 {code}
 Here $HADOOP_CLIENT_OPTS is set in hadoop-env.sh 
 ({{hadoop-common-project/hadoop-common/src/main/conf/hadoop-env.sh}})
 {code}
 export HADOOP_CLIENT_OPTS=-Xmx512m $HADOOP_CLIENT_OPTS
 {code}
 Currently the final default java command looks like:
 {code}java -Xmx1000m  -Xmx512m CLASS_NAME ARGUMENTS{code}
 And if users also specify the -Xmx in the $HADOOP_CLIENT_OPTS, there will be 
 three -Xmx configurations. 
 The hadoop setup tutorial only discusses hadoop-env.sh, and it looks that 
 users should not make any change in hadoop-config.sh.
 We should let hadoop smart to choose the right one before launching the java 
 command, instead of leaving for jvm to make the decision.





[jira] [Created] (HADOOP-10119) Document hadoop archive -p option

2013-11-19 Thread Akira AJISAKA (JIRA)
Akira AJISAKA created HADOOP-10119:
--

 Summary: Document hadoop archive -p option
 Key: HADOOP-10119
 URL: https://issues.apache.org/jira/browse/HADOOP-10119
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.2.0
Reporter: Akira AJISAKA
Priority: Minor


Currently the hadoop archive -p (relative parent path) option is required, but it 
is not documented.
See 
http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/CommandsManual.html#archive
 .





[jira] [Commented] (HADOOP-9870) Mixed configurations for JVM -Xmx in hadoop command

2013-11-19 Thread Wei Yan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13827354#comment-13827354
 ] 

Wei Yan commented on HADOOP-9870:
-

[~vinayrpet]. As I said above, I haven't found any documentation stating that the 
JVM picks the last one. Correct me if I'm wrong. So this patch aims to make the 
heap setting explicit, instead of letting the JVM decide which value becomes the 
effective setting.

 Mixed configurations for JVM -Xmx in hadoop command
 ---

 Key: HADOOP-9870
 URL: https://issues.apache.org/jira/browse/HADOOP-9870
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Wei Yan
 Attachments: HADOOP-9870.patch, HADOOP-9870.patch, HADOOP-9870.patch







[jira] [Commented] (HADOOP-3733) s3: URLs break when Secret Key contains a slash, even if encoded

2013-11-19 Thread Charles Menguy (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-3733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13827360#comment-13827360
 ] 

Charles Menguy commented on HADOOP-3733:


I can confirm that I have been hitting this issue too, and other people at my 
company have hit it as well.
It would be great to see this patch in an upcoming release.
Thanks!

 s3: URLs break when Secret Key contains a slash, even if encoded
 --

 Key: HADOOP-3733
 URL: https://issues.apache.org/jira/browse/HADOOP-3733
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Affects Versions: 0.17.1, 2.0.2-alpha
Reporter: Stuart Sierra
Priority: Minor
 Attachments: HADOOP-3733-20130223T011025Z.patch, HADOOP-3733.patch, 
 hadoop-3733.patch


 When using URLs of the form s3://ID:SECRET@BUCKET/ at the command line, 
 distcp fails if the SECRET contains a slash, even when the slash is 
 URL-encoded as %2F.
 Say your AWS Access Key ID is RYWX12N9WCY42XVOL8WH
 And your AWS Secret Key is Xqj1/NMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv
 And your bucket is called mybucket
 You can URL-encode the Secret Key as 
 Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv
 But this doesn't work:
 {noformat}
 $ bin/hadoop distcp file:///source  
 s3://RYWX12N9WCY42XVOL8WH:Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv@mybucket/dest
 08/07/09 15:05:22 INFO util.CopyFiles: srcPaths=[file:///source]
 08/07/09 15:05:22 INFO util.CopyFiles: 
 destPath=s3://RYWX12N9WCY42XVOL8WH:Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv@mybucket/dest
 08/07/09 15:05:23 WARN httpclient.RestS3Service: Unable to access bucket: 
 mybucket
 org.jets3t.service.S3ServiceException: S3 HEAD request failed. 
 ResponseCode=403, ResponseMessage=Forbidden
 at 
 org.jets3t.service.impl.rest.httpclient.RestS3Service.performRequest(RestS3Service.java:339)
 ...
 With failures, global counters are inaccurate; consider running with -i
 Copy failed: org.apache.hadoop.fs.s3.S3Exception: 
 org.jets3t.service.S3ServiceException: S3 PUT failed. XML Error Message: 
 <?xml version="1.0" 
 encoding="UTF-8"?><Error><Code>SignatureDoesNotMatch</Code><Message>The 
 request signature we calculated does not match the signature you provided. 
 Check your key and signing method.</Message>
 at 
 org.apache.hadoop.fs.s3.Jets3tFileSystemStore.createBucket(Jets3tFileSystemStore.java:141)
 ...
 {noformat}
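For reference, the encode/decode round trip the fs/s3 URI handling would need to perform can be sketched with Python's standard library. This is a hedged illustration of percent-encoding, not the Hadoop code path; the key below is the example value from this report:

```python
from urllib.parse import quote, unquote

secret = "Xqj1/NMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv"  # example secret from this report
encoded = quote(secret, safe="")  # the slash becomes %2F, making the URI parseable
print(encoded)                    # Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv

# Before signing the request, the filesystem must decode the secret back;
# otherwise the signature is computed with the literal "%2F" in the key,
# and S3 responds with SignatureDoesNotMatch, as in the log above.
assert unquote(encoded) == secret
```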





[jira] [Commented] (HADOOP-9870) Mixed configurations for JVM -Xmx in hadoop command

2013-11-19 Thread Vinay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13827363#comment-13827363
 ] 

Vinay commented on HADOOP-9870:
---

bq. I haven't found any documents that said the jvm would pick the last one
Yes, you are right. I didn't find any explicit Hadoop documentation about that 
either. But we tested it and found that only the later argument value is used, 
and we run our clusters with a value configured higher than the default of 1000m. 
User-specified opts are added at the end of the command-line list, just before 
the class name, to make sure that the user's parameters take effect.

 Mixed configurations for JVM -Xmx in hadoop command
 ---

 Key: HADOOP-9870
 URL: https://issues.apache.org/jira/browse/HADOOP-9870
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Wei Yan
 Attachments: HADOOP-9870.patch, HADOOP-9870.patch, HADOOP-9870.patch




