[jira] [Commented] (HADOOP-9471) hadoop-client wrongfully excludes jetty-util JAR, breaking webhdfs
[ https://issues.apache.org/jira/browse/HADOOP-9471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629984#comment-13629984 ]

Hudson commented on HADOOP-9471:

Integrated in Hadoop-Yarn-trunk #181 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/181/])
HADOOP-9471. Merged into 2.0.4-alpha. Fixing CHANGES.txt (Revision 1467139)

Result = SUCCESS
vinodkv : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1467139
Files :
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt

hadoop-client wrongfully excludes jetty-util JAR, breaking webhdfs
------------------------------------------------------------------
Key: HADOOP-9471
URL: https://issues.apache.org/jira/browse/HADOOP-9471
Project: Hadoop Common
Issue Type: Bug
Components: build
Affects Versions: 2.0.3-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
Fix For: 2.0.4-alpha
Attachments: HADOOP-9471.patch

WebHdfsFileSystem uses jetty-util's JSON class. Because hadoop-client excludes that JAR, applications built against the hadoop-client POM fail with:

{code}
java.lang.NoClassDefFoundError: org/mortbay/util/ajax/JSON
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.jsonParse(WebHdfsFileSystem.java:277)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$Runner.getResponse(WebHdfsFileSystem.java:561)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$Runner.run(WebHdfsFileSystem.java:480)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.run(WebHdfsFileSystem.java:413)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getHdfsFileStatus(WebHdfsFileSystem.java:580)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getFileStatus(WebHdfsFileSystem.java:591)
	at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1332)
{code}

--
This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
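The committed fix removes the jetty-util exclusion from the hadoop-client POM. Until an application is on a release that contains the fix, it could re-declare the dependency itself. A minimal sketch of such a downstream pom.xml workaround; the version numbers below are illustrative assumptions, not taken from the issue:

```xml
<!-- Sketch of a downstream workaround: re-add jetty-util so that
     WebHdfsFileSystem can find org.mortbay.util.ajax.JSON at runtime.
     Versions shown are assumptions for illustration only. -->
<dependencies>
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>2.0.3-alpha</version>
  </dependency>
  <dependency>
    <groupId>org.mortbay.jetty</groupId>
    <artifactId>jetty-util</artifactId>
    <version>6.1.26</version>
  </dependency>
</dependencies>
```

Explicitly declaring the artifact at the top level overrides the exclusion inherited from hadoop-client, so the JAR lands on the runtime classpath again.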
[jira] [Commented] (HADOOP-9233) Cover package org.apache.hadoop.io.compress.zlib with unit tests
[ https://issues.apache.org/jira/browse/HADOOP-9233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629986#comment-13629986 ]

Hudson commented on HADOOP-9233:

Integrated in Hadoop-Yarn-trunk #181 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/181/])
HADOOP-9233. Cover package org.apache.hadoop.io.compress.zlib with unit tests. Contributed by Vadim Bondarev (Revision 1467090)

Result = SUCCESS
jlowe : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1467090
Files :
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/CompressDecompressTester.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/TestCompressorDecompressor.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/zlib
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/zlib/TestZlibCompressorDecompressor.java

Cover package org.apache.hadoop.io.compress.zlib with unit tests
----------------------------------------------------------------
Key: HADOOP-9233
URL: https://issues.apache.org/jira/browse/HADOOP-9233
Project: Hadoop Common
Issue Type: Test
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
Reporter: Vadim Bondarev
Assignee: Vadim Bondarev
Fix For: 2.0.5-beta, 0.23.8
Attachments: HADOOP-9233-branch-0.23-b.patch, HADOOP-9233-branch-2-a.patch, HADOOP-9233-branch-2-b.patch, HADOOP-9233-trunk-a.patch, HADOOP-9233-trunk-b.patch, HADOOP-9233-trunk-c.patch, HADOOP-9233-trunk-d.patch
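The tests added here (TestZlibCompressorDecompressor and the shared CompressDecompressTester) exercise compressor/decompressor pairs round-trip style: compress a known buffer, decompress it, and check the bytes survive. A self-contained sketch of that idea using the JDK's own zlib binding (java.util.zip) rather than Hadoop's ZlibCompressor, so it runs without the native library; class and method names are mine, not from the patch:

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class ZlibRoundTrip {

    // Compress with the JDK's zlib binding. The Hadoop tests sweep several
    // compression levels and strategies; one level is enough for a sketch.
    public static byte[] compress(byte[] input) {
        Deflater deflater = new Deflater(Deflater.BEST_COMPRESSION);
        deflater.setInput(input);
        deflater.finish();
        byte[] buf = new byte[input.length * 2 + 64];
        int n = deflater.deflate(buf);
        deflater.end();
        return Arrays.copyOf(buf, n);
    }

    // Decompress; the caller supplies the original length to size the
    // output buffer, as a round-trip test conveniently can.
    public static byte[] decompress(byte[] compressed, int originalLength) {
        try {
            Inflater inflater = new Inflater();
            inflater.setInput(compressed);
            byte[] out = new byte[originalLength];
            int n = inflater.inflate(out);
            inflater.end();
            return Arrays.copyOf(out, n);
        } catch (DataFormatException e) {
            throw new IllegalStateException("corrupt zlib stream", e);
        }
    }

    public static void main(String[] args) {
        byte[] original = "hadoop zlib round-trip".getBytes(StandardCharsets.UTF_8);
        byte[] restored = decompress(compress(original), original.length);
        System.out.println(Arrays.equals(original, restored));
    }
}
```

The same round-trip shape, with Lz4Compressor/Lz4Decompressor substituted for Deflater/Inflater, is what the HADOOP-9222 tests below apply to the lz4 codec.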
[jira] [Commented] (HADOOP-9222) Cover package with org.apache.hadoop.io.lz4 unit tests
[ https://issues.apache.org/jira/browse/HADOOP-9222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629988#comment-13629988 ]

Hudson commented on HADOOP-9222:

Integrated in Hadoop-Yarn-trunk #181 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/181/])
HADOOP-9222. Cover package with org.apache.hadoop.io.lz4 unit tests. Contributed by Vadim Bondarev (Revision 1467072)

Result = SUCCESS
jlowe : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1467072
Files :
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/lz4
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/lz4/TestLz4CompressorDecompressor.java

Cover package with org.apache.hadoop.io.lz4 unit tests
------------------------------------------------------
Key: HADOOP-9222
URL: https://issues.apache.org/jira/browse/HADOOP-9222
Project: Hadoop Common
Issue Type: Test
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
Reporter: Vadim Bondarev
Assignee: Vadim Bondarev
Fix For: 2.0.5-beta, 0.23.8
Attachments: HADOOP-9222-branch-0.23-a.patch, HADOOP-9222-branch-0.23-b.patch, HADOOP-9222-branch-2-a.patch, HADOOP-9222-branch-2-b.patch, HADOOP-9222-trunk-a.patch, HADOOP-9222-trunk-b.patch, HADOOP-9222-trunk-c.patch

Add a test class, TestLz4CompressorDecompressor, with methods covering Lz4Compressor and Lz4Decompressor.
[jira] [Commented] (HADOOP-9233) Cover package org.apache.hadoop.io.compress.zlib with unit tests
[ https://issues.apache.org/jira/browse/HADOOP-9233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13630018#comment-13630018 ]

Hudson commented on HADOOP-9233:

Integrated in Hadoop-Hdfs-0.23-Build #579 (See [https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/579/])
svn merge -c 1467090 FIXES: HADOOP-9233. Cover package org.apache.hadoop.io.compress.zlib with unit tests. Contributed by Vadim Bondarev (Revision 1467102)

Result = SUCCESS
jlowe : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1467102
Files :
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/CompressDecompressTester.java
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/TestCompressorDecompressor.java
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/zlib
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/zlib/TestZlibCompressorDecompressor.java
[jira] [Commented] (HADOOP-9222) Cover package with org.apache.hadoop.io.lz4 unit tests
[ https://issues.apache.org/jira/browse/HADOOP-9222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13630019#comment-13630019 ]

Hudson commented on HADOOP-9222:

Integrated in Hadoop-Hdfs-0.23-Build #579 (See [https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/579/])
svn merge -c 1467072 FIXES: HADOOP-9222. Cover package with org.apache.hadoop.io.lz4 unit tests. Contributed by Vadim Bondarev (Revision 1467080)

Result = SUCCESS
jlowe : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1467080
Files :
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/lz4
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/lz4/TestLz4CompressorDecompressor.java
[jira] [Commented] (HADOOP-9471) hadoop-client wrongfully excludes jetty-util JAR, breaking webhdfs
[ https://issues.apache.org/jira/browse/HADOOP-9471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13630037#comment-13630037 ]

Hudson commented on HADOOP-9471:

Integrated in Hadoop-Hdfs-trunk #1370 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1370/])
HADOOP-9471. Merged into 2.0.4-alpha. Fixing CHANGES.txt (Revision 1467139)

Result = FAILURE
vinodkv : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1467139
Files :
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
[jira] [Commented] (HADOOP-9233) Cover package org.apache.hadoop.io.compress.zlib with unit tests
[ https://issues.apache.org/jira/browse/HADOOP-9233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13630039#comment-13630039 ]

Hudson commented on HADOOP-9233:

Integrated in Hadoop-Hdfs-trunk #1370 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1370/])
HADOOP-9233. Cover package org.apache.hadoop.io.compress.zlib with unit tests. Contributed by Vadim Bondarev (Revision 1467090)

Result = FAILURE
jlowe : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1467090
Files :
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/CompressDecompressTester.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/TestCompressorDecompressor.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/zlib
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/zlib/TestZlibCompressorDecompressor.java
[jira] [Commented] (HADOOP-9222) Cover package with org.apache.hadoop.io.lz4 unit tests
[ https://issues.apache.org/jira/browse/HADOOP-9222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13630041#comment-13630041 ]

Hudson commented on HADOOP-9222:

Integrated in Hadoop-Hdfs-trunk #1370 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1370/])
HADOOP-9222. Cover package with org.apache.hadoop.io.lz4 unit tests. Contributed by Vadim Bondarev (Revision 1467072)

Result = FAILURE
jlowe : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1467072
Files :
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/lz4
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/lz4/TestLz4CompressorDecompressor.java
[jira] [Commented] (HADOOP-9471) hadoop-client wrongfully excludes jetty-util JAR, breaking webhdfs
[ https://issues.apache.org/jira/browse/HADOOP-9471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13630076#comment-13630076 ]

Hudson commented on HADOOP-9471:

Integrated in Hadoop-Mapreduce-trunk #1397 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1397/])
HADOOP-9471. Merged into 2.0.4-alpha. Fixing CHANGES.txt (Revision 1467139)

Result = SUCCESS
vinodkv : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1467139
Files :
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
[jira] [Commented] (HADOOP-9233) Cover package org.apache.hadoop.io.compress.zlib with unit tests
[ https://issues.apache.org/jira/browse/HADOOP-9233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13630078#comment-13630078 ]

Hudson commented on HADOOP-9233:

Integrated in Hadoop-Mapreduce-trunk #1397 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1397/])
HADOOP-9233. Cover package org.apache.hadoop.io.compress.zlib with unit tests. Contributed by Vadim Bondarev (Revision 1467090)

Result = SUCCESS
jlowe : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1467090
Files :
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/CompressDecompressTester.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/TestCompressorDecompressor.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/zlib
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/zlib/TestZlibCompressorDecompressor.java
[jira] [Commented] (HADOOP-9222) Cover package with org.apache.hadoop.io.lz4 unit tests
[ https://issues.apache.org/jira/browse/HADOOP-9222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13630080#comment-13630080 ]

Hudson commented on HADOOP-9222:

Integrated in Hadoop-Mapreduce-trunk #1397 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1397/])
HADOOP-9222. Cover package with org.apache.hadoop.io.lz4 unit tests. Contributed by Vadim Bondarev (Revision 1467072)

Result = SUCCESS
jlowe : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1467072
Files :
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/lz4
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/lz4/TestLz4CompressorDecompressor.java
[jira] [Commented] (HADOOP-9472) Cleanup hadoop-config.cmd
[ https://issues.apache.org/jira/browse/HADOOP-9472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13630104#comment-13630104 ]

Arpit Agarwal commented on HADOOP-9472:
---
+1

Cleanup hadoop-config.cmd
-------------------------
Key: HADOOP-9472
URL: https://issues.apache.org/jira/browse/HADOOP-9472
Project: Hadoop Common
Issue Type: Bug
Affects Versions: 1-win
Reporter: Ivan Mitic
Assignee: Ivan Mitic
Priority: Minor
Attachments: HADOOP-9472.branch-1-win.cleanup.patch

Some portions of the hadoop-config.cmd script are unused and should be cleaned up.
[jira] [Created] (HADOOP-9473) typo in FileUtil copy() method
Glen Mazza created HADOOP-9473:
--
Summary: typo in FileUtil copy() method
Key: HADOOP-9473
URL: https://issues.apache.org/jira/browse/HADOOP-9473
Project: Hadoop Common
Issue Type: Bug
Components: fs
Affects Versions: 1.1.2
Reporter: Glen Mazza
Priority: Trivial

typo:

Index: src/core/org/apache/hadoop/fs/FileUtil.java
===================================================================
--- src/core/org/apache/hadoop/fs/FileUtil.java	(revision 1467295)
+++ src/core/org/apache/hadoop/fs/FileUtil.java	(working copy)
@@ -178,7 +178,7 @@
       // Check if dest is directory
       if (!dstFS.exists(dst)) {
         throw new IOException("`" + dst + "': specified destination directory " +
-                              "doest not exist");
+                              "does not exist");
       } else {
         FileStatus sdst = dstFS.getFileStatus(dst);
         if (!sdst.isDir())
[jira] [Updated] (HADOOP-9473) typo in FileUtil copy() method
[ https://issues.apache.org/jira/browse/HADOOP-9473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Glen Mazza updated HADOOP-9473:
---
Description:

typo:

{code}
Index: src/core/org/apache/hadoop/fs/FileUtil.java
===================================================================
--- src/core/org/apache/hadoop/fs/FileUtil.java	(revision 1467295)
+++ src/core/org/apache/hadoop/fs/FileUtil.java	(working copy)
@@ -178,7 +178,7 @@
       // Check if dest is directory
       if (!dstFS.exists(dst)) {
         throw new IOException("`" + dst + "': specified destination directory " +
-                              "doest not exist");
+                              "does not exist");
       } else {
         FileStatus sdst = dstFS.getFileStatus(dst);
         if (!sdst.isDir())
{code}

was: the same patch text, without the {code} markup.
[jira] [Created] (HADOOP-9474) fs -put command doesn't work if I selecting certain files from a local folder
Glen Mazza created HADOOP-9474:
--
Summary: fs -put command doesn't work if I selecting certain files from a local folder
Key: HADOOP-9474
URL: https://issues.apache.org/jira/browse/HADOOP-9474
Project: Hadoop Common
Issue Type: Bug
Components: fs
Affects Versions: 1.1.2
Reporter: Glen Mazza

The following four commands (a) - (d) were run sequentially. From (a) - (c) the HDFS folder inputABC does not yet exist. (a) and (b) improperly refuse to put the files from conf/*.xml into inputABC because the folder inputABC doesn't yet exist. However, in (c), when I make the same request with just conf (and not conf/*.xml), HDFS correctly creates inputABC and copies the files over. We see that inputABC now exists in (d): when I subsequently try to copy the conf/*.xml files, it complains that the files already exist there. IOW, I can put conf into a nonexistent HDFS folder and fs will create the folder for me, but I can't do the same with conf/*.xml -- yet the latter should work equally well. The problem appears to be in org.apache.hadoop.fs.FileUtil, line 176, which properly routes conf to have its files copied but has conf/*.xml subsequently return a nonexistent-folder error.
{noformat}
a) gmazza@gmazza-work:/media/work1/hadoop-1.1.2$ bin/hadoop fs -put conf/*.xml inputABC
put: `inputABC': specified destination directory doest not exist

b) gmazza@gmazza-work:/media/work1/hadoop-1.1.2$ bin/hadoop fs -put conf/*.xml inputABC
put: `inputABC': specified destination directory doest not exist

c) gmazza@gmazza-work:/media/work1/hadoop-1.1.2$ bin/hadoop fs -put conf inputABC

d) gmazza@gmazza-work:/media/work1/hadoop-1.1.2$ bin/hadoop fs -put conf/*.xml inputABC
put: Target inputABC/capacity-scheduler.xml already exists
Target inputABC/core-site.xml already exists
Target inputABC/fair-scheduler.xml already exists
Target inputABC/hadoop-policy.xml already exists
Target inputABC/hdfs-site.xml already exists
Target inputABC/mapred-queue-acls.xml already exists
Target inputABC/mapred-site.xml already exists
{noformat}
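The asymmetry the transcript shows comes from the destination check in FileUtil.copy: the shell expands conf/*.xml to multiple sources before the copy runs. A hypothetical, heavily simplified sketch of that logic (class, method, and control flow are mine for illustration, not Hadoop's actual code) reproduces the reported behavior:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class PutSketch {

    // Simplified model of the check near FileUtil.java line 176: with
    // several sources the destination directory must already exist, while
    // a single source gets its destination created on demand. This is
    // exactly the asymmetry between commands (a)/(b) and (c) above.
    public static void put(List<Path> srcs, Path dst) throws IOException {
        if (srcs.size() > 1) {
            if (!Files.isDirectory(dst)) {
                throw new IOException("`" + dst
                        + "': specified destination directory does not exist");
            }
            for (Path src : srcs) {
                Files.copy(src, dst.resolve(src.getFileName().toString()));
            }
        } else {
            Files.createDirectories(dst);   // single source: created on demand
            Path src = srcs.get(0);
            Files.copy(src, dst.resolve(src.getFileName().toString()));
        }
    }
}
```

Making the multi-source branch create the destination directory the same way the single-source branch does would remove the inconsistency.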
[jira] [Commented] (HADOOP-9474) fs -put command doesn't work if I selecting certain files from a local folder
[ https://issues.apache.org/jira/browse/HADOOP-9474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13630276#comment-13630276 ]

Daryn Sharp commented on HADOOP-9474:
---
Someone should consider back-porting the post-1.x FsShell. It fixes virtually all of the issues being reported against 1.x. I would expect the new FsShell to be practically a drop-in replacement, although its consistent behavior and POSIX compliance will introduce some incompatibilities.
[jira] [Commented] (HADOOP-9473) typo in FileUtil copy() method
[ https://issues.apache.org/jira/browse/HADOOP-9473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13630274#comment-13630274 ]

Suresh Srinivas commented on HADOOP-9473:
---
Glen, would you like to work on this? If you post a patch, I will commit it. I have added you as a contributor and have assigned the jira to you.
[jira] [Updated] (HADOOP-9473) typo in FileUtil copy() method
[ https://issues.apache.org/jira/browse/HADOOP-9473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Suresh Srinivas updated HADOOP-9473:
---
Assignee: Glen Mazza
[jira] [Commented] (HADOOP-9473) typo in FileUtil copy() method
[ https://issues.apache.org/jira/browse/HADOOP-9473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13630303#comment-13630303 ] Glen Mazza commented on HADOOP-9473: Suresh, I just posted the patch, it's in the description. typo in FileUtil copy() method -- Key: HADOOP-9473 URL: https://issues.apache.org/jira/browse/HADOOP-9473 Project: Hadoop Common Issue Type: Bug Components: fs Affects Versions: 1.1.2 Reporter: Glen Mazza Assignee: Glen Mazza Priority: Trivial typo: {code} Index: src/core/org/apache/hadoop/fs/FileUtil.java === --- src/core/org/apache/hadoop/fs/FileUtil.java (revision 1467295) +++ src/core/org/apache/hadoop/fs/FileUtil.java (working copy) @@ -178,7 +178,7 @@ // Check if dest is directory if (!dstFS.exists(dst)) { throw new IOException(` + dst +': specified destination directory + -doest not exist); +does not exist); } else { FileStatus sdst = dstFS.getFileStatus(dst); if (!sdst.isDir()) {code} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-9474) fs -put command doesn't work if selecting certain files from a local folder
[ https://issues.apache.org/jira/browse/HADOOP-9474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Glen Mazza updated HADOOP-9474: --- Description: The following four commands (a) - (d) were run sequentially. From (a) - (c) HDFS folder inputABC does not yet exist. (a) and (b) are improperly refusing to put the files from conf/*.xml into inputABC because folder inputABC doesn't yet exist. However, in (c) when I make the same request except with just conf (and not conf/*.xml) HDFS will correctly create inputABC and copy the folders over. We see that inputABC now exists in (d) when I subsequently try to copy the conf/*.xml folders, it correctly complains that the files already exist there. IOW, I can put conf into a nonexisting HDFS folder and fs will create the folder for me, but I can't do the same with conf/*.xml -- but the latter should work equally as well. The problem appears to be in org.apache.hadoop.fs.FileUtil, line 176, which properly routes conf to have its files copied but will have conf/*.xml subsequently return a nonexisting folder error. {noformat} a) gmazza@gmazza-work:/media/work1/hadoop-1.1.2$ bin/hadoop fs -put conf/*.xml inputABC put: `inputABC': specified destination directory doest not exist b) gmazza@gmazza-work:/media/work1/hadoop-1.1.2$ bin/hadoop fs -put conf/*.xml inputABC put: `inputABC': specified destination directory doest not exist c) gmazza@gmazza-work:/media/work1/hadoop-1.1.2$ bin/hadoop fs -put conf inputABC d) gmazza@gmazza-work:/media/work1/hadoop-1.1.2$ bin/hadoop fs -put conf/*.xml inputABC put: Target inputABC/capacity-scheduler.xml already exists Target inputABC/core-site.xml already exists Target inputABC/fair-scheduler.xml already exists Target inputABC/hadoop-policy.xml already exists Target inputABC/hdfs-site.xml already exists Target inputABC/mapred-queue-acls.xml already exists Target inputABC/mapred-site.xml already exists {noformat} was: The following four commands (a) - (d) were run sequentially. 
From (a) - (c) HDFS folder inputABC does not yet exist. (a) and (b) are improperly refusing to put the files from conf/*.xml into inputABC because folder inputABC doesn't yet exist. However, in (c) when I make the same request except with just conf (and not conf/*.xml) HDFS will correctly create inputABC and copy the folders over. We see that inputABC now exists in (d) when I subsequently try to copy the conf/*.xml folders, it complains that its files already exist there. IOW, I can put conf into a nonexisting HDFS folder and fs will create the folder for me, but I can't do the same with conf/*.xml -- but the latter should work equally as well. The problem appears to be in org.apache.hadoop.fs.FileUtil, line 176, which properly routes conf to have its files copied but will have conf/*.xml subsequently return a nonexisting folder error. {noformat} a) gmazza@gmazza-work:/media/work1/hadoop-1.1.2$ bin/hadoop fs -put conf/*.xml inputABC put: `inputABC': specified destination directory doest not exist b) gmazza@gmazza-work:/media/work1/hadoop-1.1.2$ bin/hadoop fs -put conf/*.xml inputABC put: `inputABC': specified destination directory doest not exist c) gmazza@gmazza-work:/media/work1/hadoop-1.1.2$ bin/hadoop fs -put conf inputABC d) gmazza@gmazza-work:/media/work1/hadoop-1.1.2$ bin/hadoop fs -put conf/*.xml inputABC put: Target inputABC/capacity-scheduler.xml already exists Target inputABC/core-site.xml already exists Target inputABC/fair-scheduler.xml already exists Target inputABC/hadoop-policy.xml already exists Target inputABC/hdfs-site.xml already exists Target inputABC/mapred-queue-acls.xml already exists Target inputABC/mapred-site.xml already exists {noformat} Summary: fs -put command doesn't work if selecting certain files from a local folder (was: fs -put command doesn't work if I selecting certain files from a local folder) fs -put command doesn't work if selecting certain files from a local folder --- Key: HADOOP-9474 URL: 
https://issues.apache.org/jira/browse/HADOOP-9474 Project: Hadoop Common Issue Type: Bug Components: fs Affects Versions: 1.1.2 Reporter: Glen Mazza The following four commands (a) - (d) were run sequentially. From (a) - (c) HDFS folder inputABC does not yet exist. (a) and (b) are improperly refusing to put the files from conf/*.xml into inputABC because folder inputABC doesn't yet exist. However, in (c) when I make the same request except with just conf (and not conf/*.xml) HDFS will correctly create inputABC and copy the folders over. We see that inputABC now exists in (d) when I subsequently try to copy the conf/*.xml folders, it correctly complains that the files already exist there. IOW, I can put conf into a nonexisting HDFS folder
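The asymmetry described above (a multi-source put demands a pre-existing destination directory, while a single-source put is allowed to create it) can be sketched as follows. This is a hypothetical simplification for illustration, not the actual Hadoop source: the class `PutCheckDemo`, its `put()` helper, and the message string it builds are made up here to mimic the branch around FileUtil.java line 176.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class PutCheckDemo {

    // Hypothetical simplification of the check near FileUtil.java line 176:
    // with several sources the destination directory must already exist,
    // but a single source may create it.
    static String put(Path[] srcs, Path dst) {
        if (srcs.length > 1 && !Files.exists(dst)) {
            return "put: `" + dst.getFileName()
                    + "': specified destination directory does not exist";
        }
        return "ok";
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempDirectory("hadoop9474");
        Path dst = tmp.resolve("inputABC");          // does not exist yet
        Path[] xmls = {
            Files.createFile(tmp.resolve("core-site.xml")),
            Files.createFile(tmp.resolve("hdfs-site.xml"))
        };
        // like (a)/(b): multiple sources into a missing directory -> rejected
        System.out.println(put(xmls, dst));
        // like (c): a single source would be allowed to create the directory
        System.out.println(put(new Path[] { xmls[0] }, dst));
    }
}
```

Running the sketch against a missing inputABC directory mirrors the (a)/(c) behavior from the report: the two-source call is rejected while the one-source call proceeds.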
[jira] [Commented] (HADOOP-9473) typo in FileUtil copy() method
[ https://issues.apache.org/jira/browse/HADOOP-9473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13630353#comment-13630353 ] Suresh Srinivas commented on HADOOP-9473: - Glen, please use a patch file to post the patch instead of the description. I am posting the patch from the description this time. Is this problem in trunk? typo in FileUtil copy() method -- Key: HADOOP-9473 URL: https://issues.apache.org/jira/browse/HADOOP-9473 Project: Hadoop Common Issue Type: Bug Components: fs Affects Versions: 1.1.2 Reporter: Glen Mazza Assignee: Glen Mazza Priority: Trivial typo: {code} Index: src/core/org/apache/hadoop/fs/FileUtil.java === --- src/core/org/apache/hadoop/fs/FileUtil.java (revision 1467295) +++ src/core/org/apache/hadoop/fs/FileUtil.java (working copy) @@ -178,7 +178,7 @@ // Check if dest is directory if (!dstFS.exists(dst)) { throw new IOException("`" + dst + "': specified destination directory " + -"doest not exist"); +"does not exist"); } else { FileStatus sdst = dstFS.getFileStatus(dst); if (!sdst.isDir()) {code} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-9473) typo in FileUtil copy() method
[ https://issues.apache.org/jira/browse/HADOOP-9473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Suresh Srinivas updated HADOOP-9473: Attachment: HADOOP-9473.patch typo in FileUtil copy() method -- Key: HADOOP-9473 URL: https://issues.apache.org/jira/browse/HADOOP-9473 Project: Hadoop Common Issue Type: Bug Components: fs Affects Versions: 1.1.2 Reporter: Glen Mazza Assignee: Glen Mazza Priority: Trivial Attachments: HADOOP-9473.patch typo: {code} Index: src/core/org/apache/hadoop/fs/FileUtil.java === --- src/core/org/apache/hadoop/fs/FileUtil.java (revision 1467295) +++ src/core/org/apache/hadoop/fs/FileUtil.java (working copy) @@ -178,7 +178,7 @@ // Check if dest is directory if (!dstFS.exists(dst)) { throw new IOException(` + dst +': specified destination directory + -doest not exist); +does not exist); } else { FileStatus sdst = dstFS.getFileStatus(dst); if (!sdst.isDir()) {code} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9473) typo in FileUtil copy() method
[ https://issues.apache.org/jira/browse/HADOOP-9473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13630359#comment-13630359 ] Glen Mazza commented on HADOOP-9473: I don't know and don't care whether the problem is in trunk; HADOOP-9206 makes trunk useless for me, forcing me to stay with the 1.1.x branch. typo in FileUtil copy() method -- Key: HADOOP-9473 URL: https://issues.apache.org/jira/browse/HADOOP-9473 Project: Hadoop Common Issue Type: Bug Components: fs Affects Versions: 1.1.2 Reporter: Glen Mazza Assignee: Glen Mazza Priority: Trivial Attachments: HADOOP-9473.patch typo: {code} Index: src/core/org/apache/hadoop/fs/FileUtil.java === --- src/core/org/apache/hadoop/fs/FileUtil.java (revision 1467295) +++ src/core/org/apache/hadoop/fs/FileUtil.java (working copy) @@ -178,7 +178,7 @@ // Check if dest is directory if (!dstFS.exists(dst)) { throw new IOException("`" + dst + "': specified destination directory " + -"doest not exist"); +"does not exist"); } else { FileStatus sdst = dstFS.getFileStatus(dst); if (!sdst.isDir()) {code} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9473) typo in FileUtil copy() method
[ https://issues.apache.org/jira/browse/HADOOP-9473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13630367#comment-13630367 ] Suresh Srinivas commented on HADOOP-9473: - bq. I don't know and don't care if the problem is in trunk
The changes in Hadoop always go first to trunk and then to older release branches; hence the question. I will see if I can talk to the folks who can help with HADOOP-9206 so that some progress is made on that. typo in FileUtil copy() method -- Key: HADOOP-9473 URL: https://issues.apache.org/jira/browse/HADOOP-9473 Project: Hadoop Common Issue Type: Bug Components: fs Affects Versions: 1.1.2 Reporter: Glen Mazza Assignee: Glen Mazza Priority: Trivial Attachments: HADOOP-9473.patch typo: {code} Index: src/core/org/apache/hadoop/fs/FileUtil.java === --- src/core/org/apache/hadoop/fs/FileUtil.java (revision 1467295) +++ src/core/org/apache/hadoop/fs/FileUtil.java (working copy) @@ -178,7 +178,7 @@ // Check if dest is directory if (!dstFS.exists(dst)) { throw new IOException("`" + dst + "': specified destination directory " + -"doest not exist"); +"does not exist"); } else { FileStatus sdst = dstFS.getFileStatus(dst); if (!sdst.isDir()) {code} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-9206) Setting up a Single Node Cluster instructions need improvement in 0.23.5/2.0.2-alpha branches
[ https://issues.apache.org/jira/browse/HADOOP-9206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Glen Mazza updated HADOOP-9206: --- Description: Hi, in contrast to the easy-to-follow 1.0.4 instructions (http://hadoop.apache.org/docs/r1.0.4/single_node_setup.html) the 0.23.5 and 2.0.2-alpha instructions (http://hadoop.apache.org/docs/r2.0.2-alpha/hadoop-yarn/hadoop-yarn-site/SingleCluster.html) need more clarification -- it seems to be written for people who already know and understand hadoop. In particular, these points need clarification: 1.) Text: You should be able to obtain the MapReduce tarball from the release. Question: What is the MapReduce tarball? What is its name? I don't see such an object within the hadoop-0.23.5.tar.gz download. 2.) Quote: NOTE: You will need protoc installed of version 2.4.1 or greater. Protoc doesn't have a website you can link to (it's just mentioned offhand when you Google it) -- is it really the case today that Hadoop has a dependency on such a minor project? At any rate, if you can have a link of where one goes to get/install Protoc that would be good. 3.) Quote: Assuming you have installed hadoop-common/hadoop-hdfs and exported $HADOOP_COMMON_HOME/$HADOOP_HDFS_HOME, untar hadoop mapreduce tarball and set environment variable $HADOOP_MAPRED_HOME to the untarred directory. I'm not sure what you mean by the forward slashes: hadoop-common/hadoop-hdfs and $HADOOP_COMMON_HOME/$HADOOP_HDFS_HOME -- do you mean (install both) or *or* just install one of the two? This needs clarification--please remove the forward slash and replace it with what you're trying to say. The audience here is complete newbie and they've been brought to this page from here: http://hadoop.apache.org/docs/r0.23.5/ (same with r2.0.2-alpha/) (quote: Getting Started - The Hadoop documentation includes the information you need to get started using Hadoop. 
Begin with the Single Node Setup which shows you how to set up a single-node Hadoop installation.), they've downloaded hadoop-0.23.5.tar.gz and want to know what to do next. Why are there potentially two applications -- hadoop-common and hadoop-hdfs and not just one? (The download doesn't appear to have two separate apps) -- if there is indeed just one app can we remove the other from the above text to avoid confusion? Again, I just downloaded hadoop-0.23.5.tar.gz -- do I need to download more? If so, let us know in the docs here. Also, the fragment: Assuming you have installed hadoop-common/hadoop-hdfs... No, I haven't, that's what *this* page is supposed to explain to me how to do -- how do I install these two (or just one of these two)? Also, what do I set $HADOOP_COMMON_HOME and/or $HADOOP_HDFS_HOME to? 4.) Quote: NOTE: The following instructions assume you have hdfs running. No, I don't--how do I do this? Again, this page is supposed to teach me that. 5.) Quote: To start the ResourceManager and NodeManager, you will have to update the configs. Assuming your $HADOOP_CONF_DIR is the configuration directory... Could you clarify here what the configuration directory is, it doesn't exist in the 0.23.5 download. I just see bin,etc,include,lib,libexec,sbin,share folders but no conf one.) 6.) Quote: Assuming that the environment variables $HADOOP_COMMON_HOME, $HADOOP_HDFS_HOME, $HADOO_MAPRED_HOME, $YARN_HOME, $JAVA_HOME and $HADOOP_CONF_DIR have been set appropriately. We'll need to know what to set YARN_HOME to here. Thanks! Glen was: Hi, in contrast to the easy-to-follow 1.0.4 instructions (http://hadoop.apache.org/docs/r1.0.4/single_node_setup.html) the 0.23.5 and 2.0.2-alpha instructions (http://hadoop.apache.org/docs/r2.0.2-alpha/hadoop-yarn/hadoop-yarn-site/SingleCluster.html) need more clarification -- it seems to be written for people who already know and understand hadoop. In particular, these points need clarification: 1.) 
Text: You should be able to obtain the MapReduce tarball from the release. Question: What is the MapReduce tarball? What is its name? I don't see such an object within the hadoop-0.23.5.tar.gz download. 2.) Quote: NOTE: You will need protoc installed of version 2.4.1 or greater. Protoc doesn't have a website you can link to (it's just mentioned offhand when you Google it) -- is it really the case today that Hadoop has a dependency on such a minor project? At any rate, if you can have a link of where one goes to get/install Protoc that would be good. 3.) Quote: Assuming you have installed hadoop-common/hadoop-hdfs and exported $HADOOP_COMMON_HOME/$HADOOP_HDFS_HOME, untar hadoop mapreduce tarball and set environment variable $HADOOP_MAPRED_HOME to the untarred directory. I'm not sure what you mean by the forward slashes: hadoop-common/hadoop-hdfs and $HADOOP_COMMON_HOME/$HADOOP_HDFS_HOME -- do you mean (install both) or *or* just install one of the two? This needs
[jira] [Updated] (HADOOP-9473) typo in FileUtil copy() method
[ https://issues.apache.org/jira/browse/HADOOP-9473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Suresh Srinivas updated HADOOP-9473: Attachment: (was: HADOOP-9473.patch) typo in FileUtil copy() method -- Key: HADOOP-9473 URL: https://issues.apache.org/jira/browse/HADOOP-9473 Project: Hadoop Common Issue Type: Bug Components: fs Affects Versions: 1.1.2 Reporter: Glen Mazza Assignee: Glen Mazza Priority: Trivial typo: {code} Index: src/core/org/apache/hadoop/fs/FileUtil.java === --- src/core/org/apache/hadoop/fs/FileUtil.java (revision 1467295) +++ src/core/org/apache/hadoop/fs/FileUtil.java (working copy) @@ -178,7 +178,7 @@ // Check if dest is directory if (!dstFS.exists(dst)) { throw new IOException(` + dst +': specified destination directory + -doest not exist); +does not exist); } else { FileStatus sdst = dstFS.getFileStatus(dst); if (!sdst.isDir()) {code} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-9473) typo in FileUtil copy() method
[ https://issues.apache.org/jira/browse/HADOOP-9473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Suresh Srinivas updated HADOOP-9473: Attachment: HADOOP-9473.branch-1.patch typo in FileUtil copy() method -- Key: HADOOP-9473 URL: https://issues.apache.org/jira/browse/HADOOP-9473 Project: Hadoop Common Issue Type: Bug Components: fs Affects Versions: 1.1.2 Reporter: Glen Mazza Assignee: Glen Mazza Priority: Trivial Attachments: HADOOP-9373.patch, HADOOP-9473.branch-1.patch typo: {code} Index: src/core/org/apache/hadoop/fs/FileUtil.java === --- src/core/org/apache/hadoop/fs/FileUtil.java (revision 1467295) +++ src/core/org/apache/hadoop/fs/FileUtil.java (working copy) @@ -178,7 +178,7 @@ // Check if dest is directory if (!dstFS.exists(dst)) { throw new IOException(` + dst +': specified destination directory + -doest not exist); +does not exist); } else { FileStatus sdst = dstFS.getFileStatus(dst); if (!sdst.isDir()) {code} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-9473) typo in FileUtil copy() method
[ https://issues.apache.org/jira/browse/HADOOP-9473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Suresh Srinivas updated HADOOP-9473: Attachment: HADOOP-9373.patch Attaching the trunk patch. typo in FileUtil copy() method -- Key: HADOOP-9473 URL: https://issues.apache.org/jira/browse/HADOOP-9473 Project: Hadoop Common Issue Type: Bug Components: fs Affects Versions: 1.1.2 Reporter: Glen Mazza Assignee: Glen Mazza Priority: Trivial Attachments: HADOOP-9373.patch, HADOOP-9473.branch-1.patch typo: {code} Index: src/core/org/apache/hadoop/fs/FileUtil.java === --- src/core/org/apache/hadoop/fs/FileUtil.java (revision 1467295) +++ src/core/org/apache/hadoop/fs/FileUtil.java (working copy) @@ -178,7 +178,7 @@ // Check if dest is directory if (!dstFS.exists(dst)) { throw new IOException(` + dst +': specified destination directory + -doest not exist); +does not exist); } else { FileStatus sdst = dstFS.getFileStatus(dst); if (!sdst.isDir()) {code} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-9473) typo in FileUtil copy() method
[ https://issues.apache.org/jira/browse/HADOOP-9473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Suresh Srinivas updated HADOOP-9473: Assignee: Suresh Srinivas (was: Glen Mazza) typo in FileUtil copy() method -- Key: HADOOP-9473 URL: https://issues.apache.org/jira/browse/HADOOP-9473 Project: Hadoop Common Issue Type: Bug Components: fs Affects Versions: 1.1.2 Reporter: Glen Mazza Assignee: Suresh Srinivas Priority: Trivial Attachments: HADOOP-9373.patch, HADOOP-9473.branch-1.patch typo: {code} Index: src/core/org/apache/hadoop/fs/FileUtil.java === --- src/core/org/apache/hadoop/fs/FileUtil.java (revision 1467295) +++ src/core/org/apache/hadoop/fs/FileUtil.java (working copy) @@ -178,7 +178,7 @@ // Check if dest is directory if (!dstFS.exists(dst)) { throw new IOException(` + dst +': specified destination directory + -doest not exist); +does not exist); } else { FileStatus sdst = dstFS.getFileStatus(dst); if (!sdst.isDir()) {code} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Work started] (HADOOP-9473) typo in FileUtil copy() method
[ https://issues.apache.org/jira/browse/HADOOP-9473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HADOOP-9473 started by Suresh Srinivas. typo in FileUtil copy() method -- Key: HADOOP-9473 URL: https://issues.apache.org/jira/browse/HADOOP-9473 Project: Hadoop Common Issue Type: Bug Components: fs Affects Versions: 1.1.2 Reporter: Glen Mazza Assignee: Suresh Srinivas Priority: Trivial Attachments: HADOOP-9373.patch, HADOOP-9473.branch-1.patch typo: {code} Index: src/core/org/apache/hadoop/fs/FileUtil.java === --- src/core/org/apache/hadoop/fs/FileUtil.java (revision 1467295) +++ src/core/org/apache/hadoop/fs/FileUtil.java (working copy) @@ -178,7 +178,7 @@ // Check if dest is directory if (!dstFS.exists(dst)) { throw new IOException(` + dst +': specified destination directory + -doest not exist); +does not exist); } else { FileStatus sdst = dstFS.getFileStatus(dst); if (!sdst.isDir()) {code} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-9473) typo in FileUtil copy() method
[ https://issues.apache.org/jira/browse/HADOOP-9473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Suresh Srinivas updated HADOOP-9473: Affects Version/s: 2.0.0-alpha Status: Patch Available (was: In Progress) typo in FileUtil copy() method -- Key: HADOOP-9473 URL: https://issues.apache.org/jira/browse/HADOOP-9473 Project: Hadoop Common Issue Type: Bug Components: fs Affects Versions: 1.1.2, 2.0.0-alpha Reporter: Glen Mazza Assignee: Suresh Srinivas Priority: Trivial Attachments: HADOOP-9373.patch, HADOOP-9473.branch-1.patch typo: {code} Index: src/core/org/apache/hadoop/fs/FileUtil.java === --- src/core/org/apache/hadoop/fs/FileUtil.java (revision 1467295) +++ src/core/org/apache/hadoop/fs/FileUtil.java (working copy) @@ -178,7 +178,7 @@ // Check if dest is directory if (!dstFS.exists(dst)) { throw new IOException(` + dst +': specified destination directory + -doest not exist); +does not exist); } else { FileStatus sdst = dstFS.getFileStatus(dst); if (!sdst.isDir()) {code} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9473) typo in FileUtil copy() method
[ https://issues.apache.org/jira/browse/HADOOP-9473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13630406#comment-13630406 ] Hadoop QA commented on HADOOP-9473: --- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12578463/HADOOP-9373.patch against trunk revision . {color:red}-1 patch{color}. The patch command could not apply the patch. Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/2446//console This message is automatically generated. typo in FileUtil copy() method -- Key: HADOOP-9473 URL: https://issues.apache.org/jira/browse/HADOOP-9473 Project: Hadoop Common Issue Type: Bug Components: fs Affects Versions: 2.0.0-alpha, 1.1.2 Reporter: Glen Mazza Assignee: Suresh Srinivas Priority: Trivial Attachments: HADOOP-9373.patch, HADOOP-9473.branch-1.patch typo: {code} Index: src/core/org/apache/hadoop/fs/FileUtil.java === --- src/core/org/apache/hadoop/fs/FileUtil.java (revision 1467295) +++ src/core/org/apache/hadoop/fs/FileUtil.java (working copy) @@ -178,7 +178,7 @@ // Check if dest is directory if (!dstFS.exists(dst)) { throw new IOException(` + dst +': specified destination directory + -doest not exist); +does not exist); } else { FileStatus sdst = dstFS.getFileStatus(dst); if (!sdst.isDir()) {code} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-9473) typo in FileUtil copy() method
[ https://issues.apache.org/jira/browse/HADOOP-9473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Suresh Srinivas updated HADOOP-9473: Attachment: (was: HADOOP-9373.patch) typo in FileUtil copy() method -- Key: HADOOP-9473 URL: https://issues.apache.org/jira/browse/HADOOP-9473 Project: Hadoop Common Issue Type: Bug Components: fs Affects Versions: 2.0.0-alpha, 1.1.2 Reporter: Glen Mazza Assignee: Suresh Srinivas Priority: Trivial Attachments: HADOOP-9473.branch-1.patch, HADOOP-9473.patch typo: {code} Index: src/core/org/apache/hadoop/fs/FileUtil.java === --- src/core/org/apache/hadoop/fs/FileUtil.java (revision 1467295) +++ src/core/org/apache/hadoop/fs/FileUtil.java (working copy) @@ -178,7 +178,7 @@ // Check if dest is directory if (!dstFS.exists(dst)) { throw new IOException(` + dst +': specified destination directory + -doest not exist); +does not exist); } else { FileStatus sdst = dstFS.getFileStatus(dst); if (!sdst.isDir()) {code} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-9473) typo in FileUtil copy() method
[ https://issues.apache.org/jira/browse/HADOOP-9473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Suresh Srinivas updated HADOOP-9473: Attachment: HADOOP-9473.patch typo in FileUtil copy() method -- Key: HADOOP-9473 URL: https://issues.apache.org/jira/browse/HADOOP-9473 Project: Hadoop Common Issue Type: Bug Components: fs Affects Versions: 2.0.0-alpha, 1.1.2 Reporter: Glen Mazza Assignee: Suresh Srinivas Priority: Trivial Attachments: HADOOP-9473.branch-1.patch, HADOOP-9473.patch typo: {code} Index: src/core/org/apache/hadoop/fs/FileUtil.java === --- src/core/org/apache/hadoop/fs/FileUtil.java (revision 1467295) +++ src/core/org/apache/hadoop/fs/FileUtil.java (working copy) @@ -178,7 +178,7 @@ // Check if dest is directory if (!dstFS.exists(dst)) { throw new IOException(` + dst +': specified destination directory + -doest not exist); +does not exist); } else { FileStatus sdst = dstFS.getFileStatus(dst); if (!sdst.isDir()) {code} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9211) HADOOP_CLIENT_OPTS default setting fixes max heap size at 128m, disregards HADOOP_HEAPSIZE
[ https://issues.apache.org/jira/browse/HADOOP-9211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13630415#comment-13630415 ] Hudson commented on HADOOP-9211: Integrated in Hadoop-trunk-Commit #3609 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/3609/]) HADOOP-9211. Set default max heap size in HADOOP_CLIENT_OPTS to 512m in order to avoid OOME. Contributed by Plamen Jeliazkov. (Revision 1467380) Result = SUCCESS shv : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1467380 Files : * /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/conf/hadoop-env.sh HADOOP_CLIENT_OPTS default setting fixes max heap size at 128m, disregards HADOOP_HEAPSIZE -- Key: HADOOP-9211 URL: https://issues.apache.org/jira/browse/HADOOP-9211 Project: Hadoop Common Issue Type: Bug Components: conf Affects Versions: 2.0.2-alpha Reporter: Sarah Weissman Assignee: Plamen Jeliazkov Attachments: HADOOP-9211.patch, hadoop-xmx.patch Original Estimate: 1m Remaining Estimate: 1m hadoop-env.sh as included in the 2.0.2alpha release tarball contains: export HADOOP_CLIENT_OPTS=-Xmx128m $HADOOP_CLIENT_OPTS This overrides any heap settings in HADOOP_HEAPSIZE. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9473) typo in FileUtil copy() method
[ https://issues.apache.org/jira/browse/HADOOP-9473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13630416#comment-13630416 ] Glen Mazza commented on HADOOP-9473: Yes, I'll be quite happy to switch to the 2.0.x branch once HADOOP-9206 is fixed, and will attach patches from now on. Thanks! typo in FileUtil copy() method -- Key: HADOOP-9473 URL: https://issues.apache.org/jira/browse/HADOOP-9473 Project: Hadoop Common Issue Type: Bug Components: fs Affects Versions: 2.0.0-alpha, 1.1.2 Reporter: Glen Mazza Assignee: Suresh Srinivas Priority: Trivial Attachments: HADOOP-9473.branch-1.patch, HADOOP-9473.patch typo: {code} Index: src/core/org/apache/hadoop/fs/FileUtil.java === --- src/core/org/apache/hadoop/fs/FileUtil.java (revision 1467295) +++ src/core/org/apache/hadoop/fs/FileUtil.java (working copy) @@ -178,7 +178,7 @@ // Check if dest is directory if (!dstFS.exists(dst)) { throw new IOException(` + dst +': specified destination directory + -doest not exist); +does not exist); } else { FileStatus sdst = dstFS.getFileStatus(dst); if (!sdst.isDir()) {code} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-9211) HADOOP_CLIENT_OPTS default setting fixes max heap size at 128m, disregards HADOOP_HEAPSIZE
[ https://issues.apache.org/jira/browse/HADOOP-9211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Konstantin Shvachko updated HADOOP-9211: Resolution: Fixed Fix Version/s: 2.0.5-beta Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) I just committed this to trunk and branch-2. Thank you Plamen. HADOOP_CLIENT_OPTS default setting fixes max heap size at 128m, disregards HADOOP_HEAPSIZE -- Key: HADOOP-9211 URL: https://issues.apache.org/jira/browse/HADOOP-9211 Project: Hadoop Common Issue Type: Bug Components: conf Affects Versions: 2.0.2-alpha Reporter: Sarah Weissman Assignee: Plamen Jeliazkov Fix For: 2.0.5-beta Attachments: HADOOP-9211.patch, hadoop-xmx.patch Original Estimate: 1m Remaining Estimate: 1m hadoop-env.sh as included in the 2.0.2alpha release tarball contains: export HADOOP_CLIENT_OPTS=-Xmx128m $HADOOP_CLIENT_OPTS This overrides any heap settings in HADOOP_HEAPSIZE. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
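Per the commit message above, the resolution raises the default client heap in hadoop-env.sh from 128m to 512m. A sketch of the before/after lines follows; the exact wording in the committed file may differ, and the comments are mine, not from the source:

```shell
# hadoop-env.sh (sketch only)
# Before (2.0.2-alpha): a hard-coded -Xmx128m that, as reported, wins over
# HADOOP_HEAPSIZE for client commands because HADOOP_CLIENT_OPTS lands after
# the heap flag derived from HADOOP_HEAPSIZE on the JVM command line:
export HADOOP_CLIENT_OPTS="-Xmx128m $HADOOP_CLIENT_OPTS"

# After HADOOP-9211: a larger 512m default to avoid OutOfMemoryError; users
# can still override it via HADOOP_CLIENT_OPTS in their own environment:
export HADOOP_CLIENT_OPTS="-Xmx512m $HADOOP_CLIENT_OPTS"
```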
[jira] [Updated] (HADOOP-9469) mapreduce/yarn source jars not included in dist tarball
[ https://issues.apache.org/jira/browse/HADOOP-9469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Parker updated HADOOP-9469: -- Attachment: HADOOP-9469.patch mapreduce/yarn source jars not included in dist tarball --- Key: HADOOP-9469 URL: https://issues.apache.org/jira/browse/HADOOP-9469 Project: Hadoop Common Issue Type: Bug Affects Versions: 0.23.0 Reporter: Thomas Graves Assignee: Robert Parker Attachments: HADOOP-9469.patch, HADOOP-9469.patch the mapreduce and yarn sources jars don't get included into the distribution tarball. It seems they get built by default just aren't assembled. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9469) mapreduce/yarn source jars not included in dist tarball
[ https://issues.apache.org/jira/browse/HADOOP-9469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13630443#comment-13630443 ] Robert Parker commented on HADOOP-9469: --- Added sources for hadoop-tools and corrected mapreduce sources that were omitted. mapreduce/yarn source jars not included in dist tarball --- Key: HADOOP-9469 URL: https://issues.apache.org/jira/browse/HADOOP-9469 Project: Hadoop Common Issue Type: Bug Affects Versions: 0.23.0 Reporter: Thomas Graves Assignee: Robert Parker Attachments: HADOOP-9469.patch, HADOOP-9469.patch the mapreduce and yarn sources jars don't get included into the distribution tarball. It seems they get built by default just aren't assembled. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9469) mapreduce/yarn source jars not included in dist tarball
[ https://issues.apache.org/jira/browse/HADOOP-9469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13630465#comment-13630465 ] Hadoop QA commented on HADOOP-9469: --- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12578475/HADOOP-9469.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 core tests{color}. The patch passed unit tests in hadoop-assemblies. {color:green}+1 contrib tests{color}. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/2448//testReport/ Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/2448//console This message is automatically generated. mapreduce/yarn source jars not included in dist tarball --- Key: HADOOP-9469 URL: https://issues.apache.org/jira/browse/HADOOP-9469 Project: Hadoop Common Issue Type: Bug Affects Versions: 0.23.0 Reporter: Thomas Graves Assignee: Robert Parker Attachments: HADOOP-9469.patch, HADOOP-9469.patch the mapreduce and yarn sources jars don't get included into the distribution tarball. 
It seems they get built by default, they just aren't assembled. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9473) typo in FileUtil copy() method
[ https://issues.apache.org/jira/browse/HADOOP-9473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13630470#comment-13630470 ] Hadoop QA commented on HADOOP-9473: --- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12578469/HADOOP-9473.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 core tests{color}. The patch passed unit tests in hadoop-common-project/hadoop-common. {color:green}+1 contrib tests{color}. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/2447//testReport/ Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/2447//console This message is automatically generated. 
typo in FileUtil copy() method -- Key: HADOOP-9473 URL: https://issues.apache.org/jira/browse/HADOOP-9473 Project: Hadoop Common Issue Type: Bug Components: fs Affects Versions: 2.0.0-alpha, 1.1.2 Reporter: Glen Mazza Assignee: Suresh Srinivas Priority: Trivial Attachments: HADOOP-9473.branch-1.patch, HADOOP-9473.patch typo:
{code}
Index: src/core/org/apache/hadoop/fs/FileUtil.java
===================================================================
--- src/core/org/apache/hadoop/fs/FileUtil.java (revision 1467295)
+++ src/core/org/apache/hadoop/fs/FileUtil.java (working copy)
@@ -178,7 +178,7 @@
     // Check if dest is directory
     if (!dstFS.exists(dst)) {
       throw new IOException("`" + dst + "': specified destination directory " +
-          "doest not exist");
+          "does not exist");
    } else {
      FileStatus sdst = dstFS.getFileStatus(dst);
      if (!sdst.isDir())
{code}
-- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9473) typo in FileUtil copy() method
[ https://issues.apache.org/jira/browse/HADOOP-9473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13630482#comment-13630482 ] Siddharth Seth commented on HADOOP-9473: Looks good. +1. typo in FileUtil copy() method -- Key: HADOOP-9473 URL: https://issues.apache.org/jira/browse/HADOOP-9473 Project: Hadoop Common Issue Type: Bug Components: fs Affects Versions: 2.0.0-alpha, 1.1.2 Reporter: Glen Mazza Assignee: Suresh Srinivas Priority: Trivial Attachments: HADOOP-9473.branch-1.patch, HADOOP-9473.patch typo:
{code}
Index: src/core/org/apache/hadoop/fs/FileUtil.java
===================================================================
--- src/core/org/apache/hadoop/fs/FileUtil.java (revision 1467295)
+++ src/core/org/apache/hadoop/fs/FileUtil.java (working copy)
@@ -178,7 +178,7 @@
     // Check if dest is directory
     if (!dstFS.exists(dst)) {
       throw new IOException("`" + dst + "': specified destination directory " +
-          "doest not exist");
+          "does not exist");
    } else {
      FileStatus sdst = dstFS.getFileStatus(dst);
      if (!sdst.isDir())
{code}
-- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9043) winutils can create unusable symlinks
[ https://issues.apache.org/jira/browse/HADOOP-9043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13630513#comment-13630513 ] Chris Nauroth commented on HADOOP-9043: --- I was just trying to resume progress on this to fix the remaining test failures related to Windows symlinks on the local file system. I discovered a new problem. The trunk code still has a Unix command dependency on the readlink command for determining the target of a symlink. (See {{RawLocalFs#readLink}}.) This is the source of some of the test failures. I propose the addition of a new winutils readlink command. The logic of this command would be: # Call {{CreateFile}} with {{OPEN_EXISTING}} and {{FILE_FLAG_OPEN_REPARSE_POINT}}. ** http://msdn.microsoft.com/en-us/library/windows/desktop/aa363858(v=vs.85).aspx # Call {{GetFileInformationByHandle}}. ** http://msdn.microsoft.com/en-us/library/windows/desktop/aa364952(v=vs.85).aspx # Check the {{BY_HANDLE_FILE_INFORMATION}} for the presence of {{FILE_ATTRIBUTE_REPARSE_POINT}}. ** http://msdn.microsoft.com/en-us/library/windows/desktop/aa363788(v=vs.85).aspx ** http://msdn.microsoft.com/en-us/library/windows/desktop/gg258117(v=vs.85).aspx # If not a reparse point, exit with code 1 and print nothing to stdout. (This is what Unix readlink does.) # Call {{DeviceIoControl}} with {{FSCTL_GET_REPARSE_POINT}}. ** http://msdn.microsoft.com/en-us/library/aa363216(v=VS.85).aspx ** http://msdn.microsoft.com/en-us/library/aa364571.aspx # Get the {{REPARSE_DATA_BUFFER}} structure. ** http://msdn.microsoft.com/en-us/library/ff552012.aspx # Check if {{ReparseTag}} is {{IO_REPARSE_TAG_SYMLINK}}. ** http://msdn.microsoft.com/en-us/library/windows/desktop/aa365511(v=vs.85).aspx # If not {{IO_REPARSE_TAG_SYMLINK}}, then...? # Get target from {{SymbolicLinkReparseBuffer}}. # Print target to stdout and exit with code 0. (This is what Unix readlink does.) 
Could someone with more Windows expertise review this and comment on whether or not the logic looks correct? There are a few edge cases that I'm not sure how to handle. What should we do if the reparse point is not a symlink (i.e. junction point)? MSDN also mentions that some reparse points may have a different data structure associated with them, a {{REPARSE_GUID_DATA_BUFFER}}, and I'm not sure what special handling is required around that. winutils can create unusable symlinks - Key: HADOOP-9043 URL: https://issues.apache.org/jira/browse/HADOOP-9043 Project: Hadoop Common Issue Type: Bug Components: util Affects Versions: 3.0.0, 1-win Reporter: Chris Nauroth Assignee: Arpit Agarwal Fix For: 3.0.0, 1-win Attachments: HADOOP-9043.branch-1.2.patch, HADOOP-9043.branch-1-win.patch, HADOOP-9043.trunk.2.patch, HADOOP-9043.trunk.patch In general, the winutils symlink command rejects attempts to create symlinks targeting a destination file that does not exist. However, if given a symlink destination with forward slashes pointing at a file that does exist, then it creates the symlink with the forward slashes, and then attempts to open the file through the symlink will fail. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-9043) winutils can create unusable symlinks
[ https://issues.apache.org/jira/browse/HADOOP-9043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Nauroth updated HADOOP-9043: -- Target Version/s: 3.0.0, 1-win (was: 1-win, trunk-win) Status: Open (was: Patch Available) winutils can create unusable symlinks - Key: HADOOP-9043 URL: https://issues.apache.org/jira/browse/HADOOP-9043 Project: Hadoop Common Issue Type: Bug Components: util Affects Versions: 3.0.0, 1-win Reporter: Chris Nauroth Assignee: Arpit Agarwal Fix For: 3.0.0, 1-win Attachments: HADOOP-9043.branch-1.2.patch, HADOOP-9043.branch-1-win.patch, HADOOP-9043.trunk.2.patch, HADOOP-9043.trunk.patch In general, the winutils symlink command rejects attempts to create symlinks targeting a destination file that does not exist. However, if given a symlink destination with forward slashes pointing at a file that does exist, then it creates the symlink with the forward slashes, and then attempts to open the file through the symlink will fail. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-9043) winutils can create unusable symlinks
[ https://issues.apache.org/jira/browse/HADOOP-9043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Nauroth updated HADOOP-9043: -- Attachment: HADOOP-9043.trunk.3.patch I'm attaching a new trunk patch that gets us a little closer to the end goal. I clicked Cancel Patch, because there are still a lot of things in flux on this jira, and the patch isn't final. I also didn't provide a branch-1-win version of the patch, because I think it will be easier to do a mass backport after finalizing the trunk patch. The changes since Arpit's prior patch are: # Use {{assumeTrue(!WINDOWS)}} to skip tests related to dangling symlinks, which don't work on Windows with local file system. # Change symlink.c so that the new validation check for forward slashes also prints an error message. # Change {{RawLocalFs#getPathWithoutSchemeAndAuthority}} to pass through {{java.io.File}} before attempting symlink creation. The symlink API operates on {{Path}} objects, which are inherently forward-slashed regardless of OS. This meant that the new validation check was rejecting the calls (as we want). Passing through {{File}} converts to backslash on Windows. This is the same approach that we took in the YARN nodemanager, which heavily relies on symlinks during container launch to reference localized resources. winutils can create unusable symlinks - Key: HADOOP-9043 URL: https://issues.apache.org/jira/browse/HADOOP-9043 Project: Hadoop Common Issue Type: Bug Components: util Affects Versions: 3.0.0, 1-win Reporter: Chris Nauroth Assignee: Arpit Agarwal Fix For: 3.0.0, 1-win Attachments: HADOOP-9043.branch-1.2.patch, HADOOP-9043.branch-1-win.patch, HADOOP-9043.trunk.2.patch, HADOOP-9043.trunk.3.patch, HADOOP-9043.trunk.patch In general, the winutils symlink command rejects attempts to create symlinks targeting a destination file that does not exist. 
However, if given a symlink destination with forward slashes pointing at a file that does exist, then it creates the symlink with the forward slashes, and then attempts to open the file through the symlink will fail. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9421) Add full length to SASL response to allow non-blocking readers
[ https://issues.apache.org/jira/browse/HADOOP-9421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13630548#comment-13630548 ] Daryn Sharp commented on HADOOP-9421: - The main problem we have is clients cannot work with a heterogeneous mix of secure and non-secure servers. The current protocol exchange is: Upon connection, the client sends a connection header which includes the auth method it intends to use, and then immediately the client follows with an initial SASL response. If the server supports the auth method, the client and server exchange SASL tokens until either side's SASL client or server decides negotiation is complete; otherwise the server sends a switch to simple. This is a very rigid implementation. It was designed with the assumption that the SASL mechanism is Kerberos or nothing (simple). The design needs to be extended to support multiple SASL mechanisms that the client and server can mutually agree upon. Some of the current problems are: # The client is dictating the auth method to the server. The server cannot tell the client its supported mechanisms. It can only reject the client or switch to simple auth. # If the client's SASL client doesn't support an initial SASL response, none is sent and the server hangs waiting for the client. # The client and server don't handle non-SASL exceptions during a SASL exchange (last I knew anyway). # The client and server don't have a definitive way to know that authentication is complete. Each side assumes auth is done if its SASL object thinks it's done. I had to hack a positive acknowledgement into the server for the SASL PLAIN mechanism. What I'd propose at a high level is the following. 
Note I didn't fill in all the cracks for brevity: # Client sends a connection header (auth method is irrelevant) # Server responds with (sasl-challenge, mechanism, protocol, token) # Client attempts to instantiate a SaslClient based on the mechanism+protocol #- Supported: (sasl-continue, evaluated-token) #-- (sasl-continue, token) exchange continues until server responds: #--# (sasl-auth-ok) #--# (sasl-error, message) #- Not supported: (sasl-next) #-- Server responds: #--# More supported auth methods, goto #2 #--# No remaining auth methods: (sasl-error, message) The benefits are: # The server is now in full control of directing authentication # Client may work in a heterogenous environment of diverse auth methods # SASL api's support for multiple mechanisms is leveraged # SASL mechanism support now becomes pluggable and extensible # Multiple auth methods may share the same SASL mechanism, ex. DIGEST-MD5, via SASL api's protocol field # Simple auth is replaced by SASL PLAIN # The IPC SASL implementation becomes dramatically simpler I've already attempted something very similar in the past, but discarded it because it was completely incompatible. If this is a reasonable design, I can take a POC stab at it. Add full length to SASL response to allow non-blocking readers -- Key: HADOOP-9421 URL: https://issues.apache.org/jira/browse/HADOOP-9421 Project: Hadoop Common Issue Type: Sub-task Affects Versions: 2.0.3-alpha Reporter: Sanjay Radia Assignee: Junping Du Attachments: HADOOP-9421.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
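The server-driven offer loop proposed above can be modeled with a toy sketch: the server advertises (mechanism, protocol) tuples in preference order, and the client takes the first one it supports, answering "sasl-next" otherwise. All class and method names here are invented for illustration; this is not Hadoop's actual IPC code.

```java
import java.util.List;
import java.util.Optional;
import java.util.Set;

public class SaslOfferDemo {
    record Offer(String mechanism, String protocol) {}

    /** Server's preference-ordered offers, e.g. GSSAPI/krb5 first. */
    static final List<Offer> SERVER_OFFERS = List.of(
        new Offer("GSSAPI", "krb5"),
        new Offer("DIGEST-MD5", "hadoop-token"),
        new Offer("PLAIN", "simple"));

    /** Client walks the offers, answering "sasl-next" until a match. */
    static Optional<Offer> negotiate(Set<String> clientMechs) {
        for (Offer o : SERVER_OFFERS) {
            if (clientMechs.contains(o.mechanism())) {
                return Optional.of(o);   // then (sasl-continue, token) exchanges
            }
            // else: (sasl-next) -> server advances to its next offer
        }
        return Optional.empty();         // (sasl-error, message)
    }

    public static void main(String[] args) {
        // A token-only client (no Kerberos) lands on DIGEST-MD5.
        System.out.println(negotiate(Set.of("DIGEST-MD5", "PLAIN")).get());
        // A client with no common mechanism hits the error path.
        System.out.println(negotiate(Set.of("EXTERNAL")).isPresent());
    }
}
```

The key property the sketch shows is that the server, not the client, directs authentication: the client never dictates a method, it only accepts or declines offers.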
[jira] [Commented] (HADOOP-9421) Add full length to SASL response to allow non-blocking readers
[ https://issues.apache.org/jira/browse/HADOOP-9421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13630558#comment-13630558 ] Daryn Sharp commented on HADOOP-9421: - Examples of mechanism/protocol tuples would be: * ( GSSAPI, krb5 ) * ( DIGEST-MD5, hadoop-token ) * ( DIGEST-MD5, ldap ) * ( PLAIN, simple ) * ( PLAIN, password ) - could maybe prompt user Add full length to SASL response to allow non-blocking readers -- Key: HADOOP-9421 URL: https://issues.apache.org/jira/browse/HADOOP-9421 Project: Hadoop Common Issue Type: Sub-task Affects Versions: 2.0.3-alpha Reporter: Sanjay Radia Assignee: Junping Du Attachments: HADOOP-9421.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9421) Add full length to SASL response to allow non-blocking readers
[ https://issues.apache.org/jira/browse/HADOOP-9421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13630570#comment-13630570 ] Daryn Sharp commented on HADOOP-9421: - I'd also like to include my idea (I think it's in another jira somewhere...) of the server TELLING the client the service of the token it wants, instead of the client trying to guess the token beforehand. The server should provide an opaque id and the client just tries to look it up. This would remove the growing mess associated with clients juggling hostnames vs. IP addrs vs. HA logical names. Decoupling the lookup of a token from its issuing host or IP, and using a server specified identifier removes current limitations on being able to support multiple NICs (client gets token over public iface 1, nodes can't use token via internal iface 2), support tokens acquired through NAT, and sharable tokens between HA NNs w/o any custom client-side logic. Add full length to SASL response to allow non-blocking readers -- Key: HADOOP-9421 URL: https://issues.apache.org/jira/browse/HADOOP-9421 Project: Hadoop Common Issue Type: Sub-task Affects Versions: 2.0.3-alpha Reporter: Sanjay Radia Assignee: Junping Du Attachments: HADOOP-9421.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-2) Reused Keys and Values fail with a Combiner
[ https://issues.apache.org/jira/browse/HADOOP-2?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13630584#comment-13630584 ] Hudson commented on HADOOP-2: - Integrated in Accumulo-1.5-Hadoop-2.0 #75 (See [https://builds.apache.org/job/Accumulo-1.5-Hadoop-2.0/75/]) ACCUMULO-804: start-dfs.sh is in a different place in hadoop-2 (Revision 1467401) Result = UNSTABLE Reused Keys and Values fail with a Combiner --- Key: HADOOP-2 URL: https://issues.apache.org/jira/browse/HADOOP-2 Project: Hadoop Common Issue Type: Bug Reporter: Owen O'Malley Assignee: Owen O'Malley Fix For: 0.1.0 Attachments: clone-map-output.patch If the map function reuses the key or value by destructively modifying it after the output.collect(key,value) call and your application uses a combiner, the data is corrupted by having lots of instances with the last key or value. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-2) Reused Keys and Values fail with a Combiner
[ https://issues.apache.org/jira/browse/HADOOP-2?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13630602#comment-13630602 ] Hudson commented on HADOOP-2: - Integrated in Accumulo-1.5 #76 (See [https://builds.apache.org/job/Accumulo-1.5/76/]) ACCUMULO-804: start-dfs.sh is in a different place in hadoop-2 (Revision 1467401) Result = SUCCESS Reused Keys and Values fail with a Combiner --- Key: HADOOP-2 URL: https://issues.apache.org/jira/browse/HADOOP-2 Project: Hadoop Common Issue Type: Bug Reporter: Owen O'Malley Assignee: Owen O'Malley Fix For: 0.1.0 Attachments: clone-map-output.patch If the map function reuses the key or value by destructively modifying it after the output.collect(key,value) call and your application uses a combiner, the data is corrupted by having lots of instances with the last key or value. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
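The corruption described in this issue can be reproduced outside MapReduce with plain collections: if a buffer keeps references to a single reused object, every buffered entry ends up showing the last value. A minimal illustration (java.util collections stand in for the combiner's input buffer; none of this is Hadoop API):

```java
import java.util.ArrayList;
import java.util.List;

public class ReuseHazardDemo {

    /** Buggy path: the buffer keeps references to the single reused object. */
    static String bufferByReference(String[] words) {
        List<StringBuilder> buffered = new ArrayList<>();
        StringBuilder value = new StringBuilder();   // reused like a Writable
        for (String word : words) {
            value.setLength(0);
            value.append(word);
            buffered.add(value);                     // stores the shared reference
        }
        return buffered.toString();                  // every slot shows the last word
    }

    /** Fixed path: buffer a defensive copy of the value instead. */
    static String bufferCopies(String[] words) {
        List<StringBuilder> buffered = new ArrayList<>();
        StringBuilder value = new StringBuilder();
        for (String word : words) {
            value.setLength(0);
            value.append(word);
            buffered.add(new StringBuilder(value));  // copy survives later mutation
        }
        return buffered.toString();
    }

    public static void main(String[] args) {
        String[] words = {"a", "b", "c"};
        System.out.println(bufferByReference(words)); // [c, c, c]
        System.out.println(bufferCopies(words));      // [a, b, c]
    }
}
```

Cloning each record before buffering, as the attached clone-map-output.patch title suggests, is the standard remedy for this class of bug.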
[jira] [Commented] (HADOOP-9455) HADOOP_CLIENT_OPTS is appended twice, causing JVM failures
[ https://issues.apache.org/jira/browse/HADOOP-9455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13630730#comment-13630730 ] Eli Collins commented on HADOOP-9455: - Makes sense, thanks Chris. HADOOP_CLIENT_OPTS is appended twice, causing JVM failures -- Key: HADOOP-9455 URL: https://issues.apache.org/jira/browse/HADOOP-9455 Project: Hadoop Common Issue Type: Bug Components: bin Affects Versions: 3.0.0, 2.0.3-alpha Reporter: Sangjin Lee Assignee: Chris Nauroth Priority: Minor Attachments: HADOOP-9455.1.patch If you set HADOOP_CLIENT_OPTS and run hadoop, you'll find that the HADOOP_CLIENT_OPTS value gets appended twice, and leads to JVM start failures for cases like adding debug flags. For example, {noformat} HADOOP_CLIENT_OPTS='-agentlib:jdwp=transport=dt_socket,address=localhost:9009,server=y,suspend=y' hadoop jar anything ERROR: Cannot load this JVM TI agent twice, check your java command line for duplicate jdwp options. Error occurred during initialization of VM agent library failed to init: jdwp {noformat} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
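A minimal sketch of the double append (simplified; not the real bin/hadoop or hadoop-env.sh): the same HADOOP_CLIENT_OPTS value is folded into HADOOP_OPTS twice, so one-shot JVM flags like -agentlib:jdwp are duplicated and the JVM refuses to start.

```shell
HADOOP_CLIENT_OPTS="-agentlib:jdwp=transport=dt_socket,address=localhost:9009,server=y,suspend=y"
HADOOP_OPTS="${HADOOP_OPTS:-} $HADOOP_CLIENT_OPTS"   # appended once
HADOOP_OPTS="$HADOOP_OPTS $HADOOP_CLIENT_OPTS"       # appended again
echo "$HADOOP_OPTS" | grep -o 'agentlib:jdwp' | wc -l
```

The count of 2 is exactly the duplicate jdwp agent the error message complains about.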
[jira] [Updated] (HADOOP-9450) HADOOP_USER_CLASSPATH_FIRST is not honored; CLASSPATH is PREpended instead of APpended
[ https://issues.apache.org/jira/browse/HADOOP-9450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Harsh J updated HADOOP-9450: Attachment: HADOOP-9450.patch Mitch is correct about all of this. The intention was to place first or regularly last, depending on the safety override. Relying on the position of where it is done in code today is neither maintainable nor reliable in the long term (an example is trunk now; it's got entries coming in before and after this manipulation). Note that the new Windows scripts already sorta do what Mitch points out rather than emulate the existing implementation. FWIW, Pig also does the same thing via PIG-3261. I've attached a trunk-applicable patch from what Mitch has posted in his comments here. If this approach looks good to everyone, I can also supply a branch-1 (and branch-2/0.23, if it doesn't apply from trunk) compat patch to backport. I cannot test the Windows changes as I lack an environment presently, but I did manually inspect the CLASSPATH via a bash -x ./hadoop to see the precedence switching doing its work properly. HADOOP_USER_CLASSPATH_FIRST is not honored; CLASSPATH is PREpended instead of APpended -- Key: HADOOP-9450 URL: https://issues.apache.org/jira/browse/HADOOP-9450 Project: Hadoop Common Issue Type: Bug Reporter: Mitch Wyle Attachments: HADOOP-9450.patch On line 133 of the hadoop shell wrapper, CLASSPATH is set as: CLASSPATH=${CLASSPATH}:${HADOOP_CLASSPATH} Notice that the built-up CLASSPATH, along with all the libs and unwanted JARs, is prepended BEFORE the user's HADOOP_CLASSPATH. Therefore there is no way to put your own JARs in front of those that the hadoop wrapper script sets. We propose a patch that reverses this order. Failing that, we would like to add a command line option to override this behavior and enable a user's JARs to be found before the wrong ones in the Hadoop library paths. We always welcome your opinions. -- This message is automatically generated by JIRA. 
If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
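A sketch of the precedence toggle under discussion (simplified; not the actual bin/hadoop logic, and the jar paths are purely illustrative): honor an opt-in HADOOP_USER_CLASSPATH_FIRST variable instead of always appending the user's jars last.

```shell
build_classpath() {
  base="/hadoop/lib/a.jar:/hadoop/lib/b.jar"   # framework jars (illustrative)
  user="/user/my.jar"                          # stands in for HADOOP_CLASSPATH
  if [ -n "${HADOOP_USER_CLASSPATH_FIRST:-}" ]; then
    echo "${user}:${base}"                     # opt-in: user jars take precedence
  else
    echo "${base}:${user}"                     # current behavior: user jars last
  fi
}
unset HADOOP_USER_CLASSPATH_FIRST
echo "default:  $(build_classpath)"
HADOOP_USER_CLASSPATH_FIRST=true
echo "override: $(build_classpath)"
```

Keeping the old ordering as the default and gating the new one behind an explicit variable preserves compatibility for deployments that rely on framework jars winning.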