[jira] [Updated] (HDFS-7492) If multiple threads call FsVolumeList#checkDirs at the same time, we should only do checkDirs once and give the results to all waiting threads
[ https://issues.apache.org/jira/browse/HDFS-7492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kiran Kumar M R updated HDFS-7492: -- Assignee: (was: Kiran Kumar M R) If multiple threads call FsVolumeList#checkDirs at the same time, we should only do checkDirs once and give the results to all waiting threads -- Key: HDFS-7492 URL: https://issues.apache.org/jira/browse/HDFS-7492 Project: Hadoop HDFS Issue Type: Improvement Components: datanode Reporter: Colin Patrick McCabe Priority: Minor checkDirs is called when we encounter certain I/O errors. It's rare to get just a single I/O error... normally you start getting many errors when a disk is going bad. For this reason, we shouldn't start a new checkDirs scan for each error. Instead, if multiple threads call FsVolumeList#checkDirs at around the same time, we should only do checkDirs once and give the results to all the waiting threads. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
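The coalescing described above can be sketched as a "single-flight" pattern (an illustrative, self-contained sketch; `SingleFlightScanner` and its fields are hypothetical names, not the actual FsVolumeList code): the first caller becomes the leader and runs the scan, while concurrent callers block on the same future and receive the leader's result.

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: concurrent checkDirs() callers share one scan.
public class SingleFlightScanner {
    private CompletableFuture<List<String>> inFlight; // guarded by this
    public final AtomicInteger scansRun = new AtomicInteger();

    public List<String> checkDirs() throws Exception {
        CompletableFuture<List<String>> future;
        boolean leader = false;
        synchronized (this) {
            if (inFlight == null) {            // no scan running: become leader
                inFlight = new CompletableFuture<>();
                leader = true;
            }
            future = inFlight;                 // followers reuse the pending scan
        }
        if (leader) {
            try {
                future.complete(doScan());     // run the expensive scan once
            } catch (Throwable t) {
                future.completeExceptionally(t);
            } finally {
                synchronized (this) { inFlight = null; }
            }
        }
        return future.get();                   // followers block on leader's result
    }

    private List<String> doScan() throws InterruptedException {
        scansRun.incrementAndGet();
        Thread.sleep(50);                      // stand-in for the expensive volume scan
        return List.of();                      // pretend: no failed volumes found
    }
}
```

With this shape, a burst of I/O errors triggers at most one scan per burst instead of one scan per error.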
[jira] [Updated] (HDFS-7899) Improve EOF error message
[ https://issues.apache.org/jira/browse/HDFS-7899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kiran Kumar M R updated HDFS-7899: -- Assignee: (was: Kiran Kumar M R) Improve EOF error message - Key: HDFS-7899 URL: https://issues.apache.org/jira/browse/HDFS-7899 Project: Hadoop HDFS Issue Type: Bug Components: hdfs-client Affects Versions: 2.6.0 Reporter: Harsh J Priority: Minor Currently, a DN disconnection for reasons other than connection timeout or refused messages, such as an EOF message as a result of rejection or other network fault, reports in this manner: {code} WARN org.apache.hadoop.hdfs.DFSClient: Failed to connect to /x.x.x.x: for block, add to deadNodes and continue. java.io.EOFException: Premature EOF: no length prefix available java.io.EOFException: Premature EOF: no length prefix available at org.apache.hadoop.hdfs.protocol.HdfsProtoUtil.vintPrefixed(HdfsProtoUtil.java:171) at org.apache.hadoop.hdfs.RemoteBlockReader2.newBlockReader(RemoteBlockReader2.java:392) at org.apache.hadoop.hdfs.BlockReaderFactory.newBlockReader(BlockReaderFactory.java:137) at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:1103) at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:538) at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:750) at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:794) at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:602) {code} This is not very clear to a user (it is logged as a WARN at the hdfs-client). It could likely be improved with a more diagnosable message, or at least the direct reason rather than a bare EOF. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
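A minimal sketch of the suggested improvement (illustrative only; `EofContext`, `peer`, and `block` are hypothetical names, not the actual DFSClient change): attach the peer address and the block being read to the bare EOFException so the WARN is diagnosable.

```java
import java.io.EOFException;
import java.io.IOException;

// Hypothetical helper: wrap a bare EOFException with connection context.
public class EofContext {
    public static IOException wrap(EOFException e, String peer, String block) {
        return new IOException("Premature EOF from " + peer
            + " while reading " + block
            + ": the DataNode likely closed the connection before sending a"
            + " response (possibly rejected the request or is overloaded)", e);
    }
}
```

The original exception stays attached as the cause, so the stack trace is preserved while the top-level message carries the direct reason.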
[jira] [Updated] (HDFS-8312) Trash does not descend into child directories to check for permissions
[ https://issues.apache.org/jira/browse/HDFS-8312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kiran Kumar M R updated HDFS-8312: -- Assignee: (was: Kiran Kumar M R) Trash does not descend into child directories to check for permissions -- Key: HDFS-8312 URL: https://issues.apache.org/jira/browse/HDFS-8312 Project: Hadoop HDFS Issue Type: Bug Components: HDFS, security Affects Versions: 2.2.0, 2.6.0 Reporter: Eric Yang HDFS trash does not descend into child directories to check whether the user has permission to delete the files. For example: Run the following commands to initialize the directory structure as the super user: {code} hadoop fs -mkdir /BSS/level1 hadoop fs -mkdir /BSS/level1/level2 hadoop fs -mkdir /BSS/level1/level2/level3 hadoop fs -put /tmp/appConfig.json /BSS/level1/level2/level3/testfile.txt hadoop fs -chown user1:users /BSS/level1/level2/level3/testfile.txt hadoop fs -chown -R user1:users /BSS/level1 hadoop fs -chmod -R 750 /BSS/level1 hadoop fs -chmod -R 640 /BSS/level1/level2/level3/testfile.txt hadoop fs -chmod 775 /BSS {code} Change to a normal user called user2. When trash is enabled: {code} sudo su user2 - hadoop fs -rm -r /BSS/level1 15/05/01 16:51:20 INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 3600 minutes, Emptier interval = 0 minutes. Moved: 'hdfs://bdvs323.svl.ibm.com:9000/BSS/level1' to trash at: hdfs://bdvs323.svl.ibm.com:9000/user/user2/.Trash/Current {code} When trash is disabled: {code} /opt/ibm/biginsights/IHC/bin/hadoop fs -Dfs.trash.interval=0 -rm -r /BSS/level1 15/05/01 16:58:31 INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 0 minutes, Emptier interval = 0 minutes. rm: Permission denied: user=user2, access=ALL, inode=/BSS/level1:user1:users:drwxr-x--- {code} There is inconsistency between trash behavior and delete behavior. When trash is enabled, files owned by user1 are deleted by user2. 
It looks like trash does not recursively validate if the child directory files can be removed. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
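A local-filesystem analogue of the missing check can be sketched as follows (illustrative only; `RecursiveDeleteCheck` is a hypothetical helper, not part of TrashPolicyDefault): before moving a tree to trash, walk it and confirm the caller could actually delete every entry. Removing a directory's children requires write access on that directory.

```java
import java.io.IOException;
import java.nio.file.FileVisitResult;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.SimpleFileVisitor;
import java.nio.file.attribute.BasicFileAttributes;

// Hypothetical sketch: verify delete permission over a whole tree before
// "moving it to trash", instead of checking only the root.
public class RecursiveDeleteCheck {
    public static boolean canDeleteTree(Path root) throws IOException {
        final boolean[] ok = { true };
        Files.walkFileTree(root, new SimpleFileVisitor<Path>() {
            @Override
            public FileVisitResult preVisitDirectory(Path dir, BasicFileAttributes attrs) {
                if (!Files.isWritable(dir)) {  // cannot remove this dir's children
                    ok[0] = false;
                    return FileVisitResult.TERMINATE;
                }
                return FileVisitResult.CONTINUE;
            }
        });
        return ok[0];
    }
}
```

An equivalent walk over HDFS inodes before the rename into `.Trash` would make the trash path agree with the plain-delete path shown above.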
[jira] [Commented] (HDFS-6440) Support more than 2 NameNodes
[ https://issues.apache.org/jira/browse/HDFS-6440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14599217#comment-14599217 ] Kiran Kumar M R commented on HDFS-6440: --- Is there a plan to add this feature to branch-2? Support more than 2 NameNodes - Key: HDFS-6440 URL: https://issues.apache.org/jira/browse/HDFS-6440 Project: Hadoop HDFS Issue Type: New Feature Components: auto-failover, ha, namenode Affects Versions: 2.4.0 Reporter: Jesse Yates Assignee: Jesse Yates Fix For: 3.0.0 Attachments: Multiple-Standby-NameNodes_V1.pdf, hdfs-6440-cdh-4.5-full.patch, hdfs-6440-trunk-v1.patch, hdfs-6440-trunk-v1.patch, hdfs-6440-trunk-v3.patch, hdfs-6440-trunk-v4.patch, hdfs-6440-trunk-v5.patch, hdfs-6440-trunk-v6.patch, hdfs-6440-trunk-v7.patch, hdfs-6440-trunk-v8.patch, hdfs-multiple-snn-trunk-v0.patch Most of the work is already done to support more than 2 NameNodes (one active, one standby). This would be the last bit to support running multiple _standby_ NameNodes; one of the standbys should be available for fail-over. Mostly, this is a matter of updating how we parse configurations, some complexity around managing the checkpointing, and updating a whole lot of tests. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-8474) Impala compilation breaks with libhdfs in 2.7 as getJNIEnv is not visible
[ https://issues.apache.org/jira/browse/HDFS-8474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14558089#comment-14558089 ] Kiran Kumar M R commented on HDFS-8474: --- On second thought, I checked getJNIEnv() usage in libhdfs; it's used internally to invoke the HDFS Java APIs, so there is no reason for libhdfs to export this API. I checked the Impala file which fails to compile: https://github.com/cloudera/Impala/blob/cdh5-trunk/be/src/exec/hbase-table-scanner.cc There, JNIEnv is used to invoke the HBase API. It looks like Impala is using jni_helper from HDFS instead of writing its own; I think Impala is better off writing its own helper. Otherwise, jni_helper may need to move to hadoop-common and provide JNIEnv for all Hadoop ecosystem services. Impala compilation breaks with libhdfs in 2.7 as getJNIEnv is not visible - Key: HDFS-8474 URL: https://issues.apache.org/jira/browse/HDFS-8474 Project: Hadoop HDFS Issue Type: Bug Components: build, libhdfs Affects Versions: 2.7.0 Environment: Red Hat Enterprise Linux Server release 6.4 and gcc 4.3.4 Reporter: Varun Saxena Assignee: Varun Saxena Priority: Critical Attachments: HDFS-8474.01.patch Impala in CDH 5.2.0 is not compiling with libhdfs.so in 2.7.0 on RedHat 6.4. This is because getJNIEnv is not visible in the so file. 
Compilation fails with the error message below: ../../build/release/exec/libExec.a(hbase-table-scanner.cc.o): In function `impala::HBaseTableScanner::Init()': /usr1/code/Impala/code/current/impala/be/src/exec/hbase-table-scanner.cc:113: undefined reference to `getJNIEnv' ../../build/release/exprs/libExprs.a(hive-udf-call.cc.o):/usr1/code/Impala/code/current/impala/be/src/exprs/hive-udf-call.cc:227: more undefined references to `getJNIEnv' follow collect2: ld returned 1 exit status make[3]: *** [be/build/release/service/impalad] Error 1 make[2]: *** [be/src/service/CMakeFiles/impalad.dir/all] Error 2 make[1]: *** [be/src/service/CMakeFiles/impalad.dir/rule] Error 2 make: *** [impalad] Error 2 Compiler Impala Failed, exit Running the following command against libhdfs.so.0.0.0 returns nothing: nm -D libhdfs.so.0.0.0 | grep getJNIEnv -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-8474) Impala compilation breaks with libhdfs in 2.7 as getJNIEnv is not visible
[ https://issues.apache.org/jira/browse/HDFS-8474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14558099#comment-14558099 ] Kiran Kumar M R commented on HDFS-8474: --- Link to the Impala JIRA: https://issues.cloudera.org/browse/IMPALA-2029 Impala compilation breaks with libhdfs in 2.7 as getJNIEnv is not visible - Key: HDFS-8474 URL: https://issues.apache.org/jira/browse/HDFS-8474 Project: Hadoop HDFS Issue Type: Bug Components: build, libhdfs Affects Versions: 2.7.0 Environment: Red Hat Enterprise Linux Server release 6.4 and gcc 4.3.4 Reporter: Varun Saxena Assignee: Varun Saxena Priority: Critical Attachments: HDFS-8474.01.patch Impala in CDH 5.2.0 is not compiling with libhdfs.so in 2.7.0 on RedHat 6.4. This is because getJNIEnv is not visible in the so file. Compilation fails with the error message below: ../../build/release/exec/libExec.a(hbase-table-scanner.cc.o): In function `impala::HBaseTableScanner::Init()': /usr1/code/Impala/code/current/impala/be/src/exec/hbase-table-scanner.cc:113: undefined reference to `getJNIEnv' ../../build/release/exprs/libExprs.a(hive-udf-call.cc.o):/usr1/code/Impala/code/current/impala/be/src/exprs/hive-udf-call.cc:227: more undefined references to `getJNIEnv' follow collect2: ld returned 1 exit status make[3]: *** [be/build/release/service/impalad] Error 1 make[2]: *** [be/src/service/CMakeFiles/impalad.dir/all] Error 2 make[1]: *** [be/src/service/CMakeFiles/impalad.dir/rule] Error 2 make: *** [impalad] Error 2 Compiler Impala Failed, exit Running the following command against libhdfs.so.0.0.0 returns nothing: nm -D libhdfs.so.0.0.0 | grep getJNIEnv -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-8474) Impala compilation breaks with libhdfs in 2.7 as getJNIEnv is not visible
[ https://issues.apache.org/jira/browse/HDFS-8474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14558082#comment-14558082 ] Kiran Kumar M R commented on HDFS-8474: --- LGTM. {{LIBHDFS_EXTERNAL}} is already defined in hdfs.h, but it is #undef'd at the end of that file. Maybe it can be reused or moved to a common header instead of being defined again in jni_helper.h. Impala compilation breaks with libhdfs in 2.7 as getJNIEnv is not visible - Key: HDFS-8474 URL: https://issues.apache.org/jira/browse/HDFS-8474 Project: Hadoop HDFS Issue Type: Bug Components: build, libhdfs Affects Versions: 2.7.0 Environment: Red Hat Enterprise Linux Server release 6.4 and gcc 4.3.4 Reporter: Varun Saxena Assignee: Varun Saxena Priority: Critical Attachments: HDFS-8474.01.patch Impala in CDH 5.2.0 is not compiling with libhdfs.so in 2.7.0 on RedHat 6.4. This is because getJNIEnv is not visible in the so file. Compilation fails with the error message below: ../../build/release/exec/libExec.a(hbase-table-scanner.cc.o): In function `impala::HBaseTableScanner::Init()': /usr1/code/Impala/code/current/impala/be/src/exec/hbase-table-scanner.cc:113: undefined reference to `getJNIEnv' ../../build/release/exprs/libExprs.a(hive-udf-call.cc.o):/usr1/code/Impala/code/current/impala/be/src/exprs/hive-udf-call.cc:227: more undefined references to `getJNIEnv' follow collect2: ld returned 1 exit status make[3]: *** [be/build/release/service/impalad] Error 1 make[2]: *** [be/src/service/CMakeFiles/impalad.dir/all] Error 2 make[1]: *** [be/src/service/CMakeFiles/impalad.dir/rule] Error 2 make: *** [impalad] Error 2 Compiler Impala Failed, exit Running the following command against libhdfs.so.0.0.0 returns nothing: nm -D libhdfs.so.0.0.0 | grep getJNIEnv -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-8310) Fix TestCLI.testAll help: help for find on Windows
[ https://issues.apache.org/jira/browse/HDFS-8310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kiran Kumar M R updated HDFS-8310: -- Labels: BB2015-05-RFC (was: BB2015-05-TBR) Fix TestCLI.testAll help: help for find on Windows Key: HDFS-8310 URL: https://issues.apache.org/jira/browse/HDFS-8310 Project: Hadoop HDFS Issue Type: Sub-task Components: test Affects Versions: 2.7.0 Reporter: Xiaoyu Yao Assignee: Kiran Kumar M R Priority: Minor Labels: BB2015-05-RFC Attachments: HDFS-8310-001.patch, HDFS-8310-002.patch The test uses RegexAcrossOutputComparator in a single regex, which does not match on Windows as shown below. {code} 2015-04-30 01:14:01,737 INFO cli.CLITestHelper (CLITestHelper.java:displayResults(155)) - --- 2015-04-30 01:14:01,737 INFO cli.CLITestHelper (CLITestHelper.java:displayResults(156)) - Test ID: [31] 2015-04-30 01:14:01,737 INFO cli.CLITestHelper (CLITestHelper.java:displayResults(157)) -Test Description: [help: help for find] 2015-04-30 01:14:01,737 INFO cli.CLITestHelper (CLITestHelper.java:displayResults(158)) - 2015-04-30 01:14:01,738 INFO cli.CLITestHelper (CLITestHelper.java:displayResults(162)) - Test Commands: [-help find] 2015-04-30 01:14:01,738 INFO cli.CLITestHelper (CLITestHelper.java:displayResults(166)) - 2015-04-30 01:14:01,738 INFO cli.CLITestHelper (CLITestHelper.java:displayResults(173)) - 2015-04-30 01:14:01,738 INFO cli.CLITestHelper (CLITestHelper.java:displayResults(177)) - Comparator: [RegexpAcrossOutputComparator] 2015-04-30 01:14:01,738 INFO cli.CLITestHelper (CLITestHelper.java:displayResults(179)) - Comparision result: [fail] 2015-04-30 01:14:01,739 INFO cli.CLITestHelper (CLITestHelper.java:displayResults(181)) - Expected output: [-find path \.\.\. expression \.\.\. : Finds all files that match the specified expression and applies selected actions to them\. If no path is specified then defaults to the current working directory\. If no expression is specified then defaults to -print\. 
The following primary expressions are recognised: -name pattern -iname pattern Evaluates as true if the basename of the file matches the pattern using standard file system globbing\. If -iname is used then the match is case insensitive\. -print -print0 Always evaluates to true. Causes the current pathname to be written to standard output followed by a newline. If the -print0 expression is used then an ASCII NULL character is appended rather than a newline. The following operators are recognised: expression -a expression expression -and expression expression expression Logical AND operator for joining two expressions\. Returns true if both child expressions return true\. Implied by the juxtaposition of two expressions and so does not need to be explicitly specified\. The second expression will not be applied if the first fails\. ] 2015-04-30 01:14:01,739 INFO cli.CLITestHelper (CLITestHelper.java:displayResults(183)) - Actual output: [-find path ... expression ... : Finds all files that match the specified expression and applies selected actions to them. If no path is specified then defaults to the current working directory. If no expression is specified then defaults to -print. The following primary expressions are recognised: -name pattern -iname pattern Evaluates as true if the basename of the file matches the pattern using standard file system globbing. If -iname is used then the match is case insensitive. -print -print0 Always evaluates to true. Causes the current pathname to be written to standard output followed by a newline. If the -print0 expression is used then an ASCII NULL character is appended rather than a newline. The following operators are recognised: expression -a expression expression -and expression expression expression Logical AND operator for joining two expressions. Returns true if both child expressions return true. Implied by the juxtaposition of two expressions and so does not need to be explicitly specified. 
The second expression will not be applied if the first fails. ] {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
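One plausible source of such an OS-dependent regex mismatch (an assumption; the log above does not confirm the root cause) is the line separator: a single regex spanning the whole help output with literal \n separators will not match output joined with Windows \r\n. A minimal, self-contained demonstration (`CrlfRegexDemo` is a hypothetical name):

```java
import java.util.regex.Pattern;

// Hypothetical demo: a full-output regex anchored on "\n" fails against
// CRLF-joined output, while "\R" (any line break, Java 8+) matches both.
public class CrlfRegexDemo {
    public static boolean matchesAcross(String regex, String output) {
        return Pattern.compile(regex, Pattern.DOTALL).matcher(output).matches();
    }
}
```

Splitting the expected output into per-line regexes, or using \R instead of \n, removes the platform dependence.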
[jira] [Updated] (HDFS-8310) Fix TestCLI.testAll help: help for find on Windows
[ https://issues.apache.org/jira/browse/HDFS-8310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kiran Kumar M R updated HDFS-8310: -- Attachment: HDFS-8310-002.patch Fix TestCLI.testAll help: help for find on Windows Key: HDFS-8310 URL: https://issues.apache.org/jira/browse/HDFS-8310 Project: Hadoop HDFS Issue Type: Sub-task Components: test Affects Versions: 2.7.0 Reporter: Xiaoyu Yao Assignee: Kiran Kumar M R Priority: Minor Labels: BB2015-05-TBR Attachments: HDFS-8310-001.patch, HDFS-8310-002.patch The test uses RegexAcrossOutputComparator in a single regex, which does not match on Windows as shown below. {code} 2015-04-30 01:14:01,737 INFO cli.CLITestHelper (CLITestHelper.java:displayResults(155)) - --- 2015-04-30 01:14:01,737 INFO cli.CLITestHelper (CLITestHelper.java:displayResults(156)) - Test ID: [31] 2015-04-30 01:14:01,737 INFO cli.CLITestHelper (CLITestHelper.java:displayResults(157)) -Test Description: [help: help for find] 2015-04-30 01:14:01,737 INFO cli.CLITestHelper (CLITestHelper.java:displayResults(158)) - 2015-04-30 01:14:01,738 INFO cli.CLITestHelper (CLITestHelper.java:displayResults(162)) - Test Commands: [-help find] 2015-04-30 01:14:01,738 INFO cli.CLITestHelper (CLITestHelper.java:displayResults(166)) - 2015-04-30 01:14:01,738 INFO cli.CLITestHelper (CLITestHelper.java:displayResults(173)) - 2015-04-30 01:14:01,738 INFO cli.CLITestHelper (CLITestHelper.java:displayResults(177)) - Comparator: [RegexpAcrossOutputComparator] 2015-04-30 01:14:01,738 INFO cli.CLITestHelper (CLITestHelper.java:displayResults(179)) - Comparision result: [fail] 2015-04-30 01:14:01,739 INFO cli.CLITestHelper (CLITestHelper.java:displayResults(181)) - Expected output: [-find path \.\.\. expression \.\.\. : Finds all files that match the specified expression and applies selected actions to them\. If no path is specified then defaults to the current working directory\. If no expression is specified then defaults to -print\. 
The following primary expressions are recognised: -name pattern -iname pattern Evaluates as true if the basename of the file matches the pattern using standard file system globbing\. If -iname is used then the match is case insensitive\. -print -print0 Always evaluates to true. Causes the current pathname to be written to standard output followed by a newline. If the -print0 expression is used then an ASCII NULL character is appended rather than a newline. The following operators are recognised: expression -a expression expression -and expression expression expression Logical AND operator for joining two expressions\. Returns true if both child expressions return true\. Implied by the juxtaposition of two expressions and so does not need to be explicitly specified\. The second expression will not be applied if the first fails\. ] 2015-04-30 01:14:01,739 INFO cli.CLITestHelper (CLITestHelper.java:displayResults(183)) - Actual output: [-find path ... expression ... : Finds all files that match the specified expression and applies selected actions to them. If no path is specified then defaults to the current working directory. If no expression is specified then defaults to -print. The following primary expressions are recognised: -name pattern -iname pattern Evaluates as true if the basename of the file matches the pattern using standard file system globbing. If -iname is used then the match is case insensitive. -print -print0 Always evaluates to true. Causes the current pathname to be written to standard output followed by a newline. If the -print0 expression is used then an ASCII NULL character is appended rather than a newline. The following operators are recognised: expression -a expression expression -and expression expression expression Logical AND operator for joining two expressions. Returns true if both child expressions return true. Implied by the juxtaposition of two expressions and so does not need to be explicitly specified. 
The second expression will not be applied if the first fails. ] {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-8310) Fix TestCLI.testAll help: help for find on Windows
[ https://issues.apache.org/jira/browse/HDFS-8310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14530125#comment-14530125 ] Kiran Kumar M R commented on HDFS-8310: --- Thanks for review Xiaoyu, I have added space and attached patch. Fix TestCLI.testAll help: help for find on Windows Key: HDFS-8310 URL: https://issues.apache.org/jira/browse/HDFS-8310 Project: Hadoop HDFS Issue Type: Sub-task Components: test Affects Versions: 2.7.0 Reporter: Xiaoyu Yao Assignee: Kiran Kumar M R Priority: Minor Labels: BB2015-05-TBR Attachments: HDFS-8310-001.patch, HDFS-8310-002.patch The test uses RegexAcrossOutputComparator in a single regex, which does not match on Windows as shown below. {code} 2015-04-30 01:14:01,737 INFO cli.CLITestHelper (CLITestHelper.java:displayResults(155)) - --- 2015-04-30 01:14:01,737 INFO cli.CLITestHelper (CLITestHelper.java:displayResults(156)) - Test ID: [31] 2015-04-30 01:14:01,737 INFO cli.CLITestHelper (CLITestHelper.java:displayResults(157)) -Test Description: [help: help for find] 2015-04-30 01:14:01,737 INFO cli.CLITestHelper (CLITestHelper.java:displayResults(158)) - 2015-04-30 01:14:01,738 INFO cli.CLITestHelper (CLITestHelper.java:displayResults(162)) - Test Commands: [-help find] 2015-04-30 01:14:01,738 INFO cli.CLITestHelper (CLITestHelper.java:displayResults(166)) - 2015-04-30 01:14:01,738 INFO cli.CLITestHelper (CLITestHelper.java:displayResults(173)) - 2015-04-30 01:14:01,738 INFO cli.CLITestHelper (CLITestHelper.java:displayResults(177)) - Comparator: [RegexpAcrossOutputComparator] 2015-04-30 01:14:01,738 INFO cli.CLITestHelper (CLITestHelper.java:displayResults(179)) - Comparision result: [fail] 2015-04-30 01:14:01,739 INFO cli.CLITestHelper (CLITestHelper.java:displayResults(181)) - Expected output: [-find path \.\.\. expression \.\.\. : Finds all files that match the specified expression and applies selected actions to them\. 
If no path is specified then defaults to the current working directory\. If no expression is specified then defaults to -print\. The following primary expressions are recognised: -name pattern -iname pattern Evaluates as true if the basename of the file matches the pattern using standard file system globbing\. If -iname is used then the match is case insensitive\. -print -print0 Always evaluates to true. Causes the current pathname to be written to standard output followed by a newline. If the -print0 expression is used then an ASCII NULL character is appended rather than a newline. The following operators are recognised: expression -a expression expression -and expression expression expression Logical AND operator for joining two expressions\. Returns true if both child expressions return true\. Implied by the juxtaposition of two expressions and so does not need to be explicitly specified\. The second expression will not be applied if the first fails\. ] 2015-04-30 01:14:01,739 INFO cli.CLITestHelper (CLITestHelper.java:displayResults(183)) - Actual output: [-find path ... expression ... : Finds all files that match the specified expression and applies selected actions to them. If no path is specified then defaults to the current working directory. If no expression is specified then defaults to -print. The following primary expressions are recognised: -name pattern -iname pattern Evaluates as true if the basename of the file matches the pattern using standard file system globbing. If -iname is used then the match is case insensitive. -print -print0 Always evaluates to true. Causes the current pathname to be written to standard output followed by a newline. If the -print0 expression is used then an ASCII NULL character is appended rather than a newline. The following operators are recognised: expression -a expression expression -and expression expression expression Logical AND operator for joining two expressions. 
Returns true if both child expressions return true. Implied by the juxtaposition of two expressions and so does not need to be explicitly specified. The second expression will not be applied if the first fails. ] {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-8311) DataStreamer.transfer() should timeout the socket InputStream.
[ https://issues.apache.org/jira/browse/HDFS-8311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14528057#comment-14528057 ] Kiran Kumar M R commented on HDFS-8311: --- That's fine, reassigned. DataStreamer.transfer() should timeout the socket InputStream. -- Key: HDFS-8311 URL: https://issues.apache.org/jira/browse/HDFS-8311 Project: Hadoop HDFS Issue Type: Bug Components: hdfs-client Reporter: Esteban Gutierrez Assignee: Esteban Gutierrez Attachments: 0001-HDFS-8311-DataStreamer.transfer-should-timeout-the-s.patch, HDFS-8311.001.patch While validating some HA failure modes we found that HDFS clients can take a long time to recover or sometimes don't recover at all since we don't set up the socket timeout in the InputStream: {code} private void transfer () { ... ... OutputStream unbufOut = NetUtils.getOutputStream(sock, writeTimeout); InputStream unbufIn = NetUtils.getInputStream(sock); ... } {code} The InputStream should have its own timeout in the same way as the OutputStream. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
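The effect of the missing read timeout can be shown with plain java.net sockets (an illustrative sketch, not the DataStreamer code; `ReadTimeoutDemo` is a hypothetical name): without SO_TIMEOUT, a read() against a silent peer blocks indefinitely; with it, the read fails fast with SocketTimeoutException and the client can recover.

```java
import java.io.InputStream;
import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketTimeoutException;

// Hypothetical demo: a read timeout turns an indefinite hang into a fast failure.
public class ReadTimeoutDemo {
    public static boolean timesOut(int timeoutMillis) throws Exception {
        try (ServerSocket server = new ServerSocket()) {
            server.bind(new InetSocketAddress("127.0.0.1", 0));
            try (Socket client = new Socket("127.0.0.1", server.getLocalPort());
                 Socket accepted = server.accept()) {
                client.setSoTimeout(timeoutMillis);  // the analogue of the missing timeout
                InputStream in = client.getInputStream();
                try {
                    in.read();                        // peer never writes a byte
                    return false;                     // unreachable in this demo
                } catch (SocketTimeoutException expected) {
                    return true;                      // failed fast, as desired
                }
            }
        }
    }
}
```

The same principle applies to the quoted transfer() snippet: the input stream should be obtained with a read timeout, mirroring how the output stream already gets writeTimeout.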
[jira] [Updated] (HDFS-8311) DataStreamer.transfer() should timeout the socket InputStream.
[ https://issues.apache.org/jira/browse/HDFS-8311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kiran Kumar M R updated HDFS-8311: -- Assignee: Esteban Gutierrez (was: Kiran Kumar M R) DataStreamer.transfer() should timeout the socket InputStream. -- Key: HDFS-8311 URL: https://issues.apache.org/jira/browse/HDFS-8311 Project: Hadoop HDFS Issue Type: Bug Components: hdfs-client Reporter: Esteban Gutierrez Assignee: Esteban Gutierrez Attachments: 0001-HDFS-8311-DataStreamer.transfer-should-timeout-the-s.patch, HDFS-8311.001.patch While validating some HA failure modes we found that HDFS clients can take a long time to recover or sometimes don't recover at all since we don't set up the socket timeout in the InputStream: {code} private void transfer () { ... ... OutputStream unbufOut = NetUtils.getOutputStream(sock, writeTimeout); InputStream unbufIn = NetUtils.getInputStream(sock); ... } {code} The InputStream should have its own timeout in the same way as the OutputStream. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (HDFS-8311) DataStreamer.transfer() should timeout the socket InputStream.
[ https://issues.apache.org/jira/browse/HDFS-8311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kiran Kumar M R reassigned HDFS-8311: - Assignee: Kiran Kumar M R DataStreamer.transfer() should timeout the socket InputStream. -- Key: HDFS-8311 URL: https://issues.apache.org/jira/browse/HDFS-8311 Project: Hadoop HDFS Issue Type: Bug Components: hdfs-client Reporter: Esteban Gutierrez Assignee: Kiran Kumar M R While validating some HA failure modes we found that HDFS clients can take a long time to recover or sometimes don't recover at all since we don't set up the socket timeout in the InputStream: {code} private void transfer () { ... ... OutputStream unbufOut = NetUtils.getOutputStream(sock, writeTimeout); InputStream unbufIn = NetUtils.getInputStream(sock); ... } {code} The InputStream should have its own timeout in the same way as the OutputStream. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (HDFS-8312) Trash does not descend into child directories to check for permissions
[ https://issues.apache.org/jira/browse/HDFS-8312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kiran Kumar M R reassigned HDFS-8312: - Assignee: Kiran Kumar M R Trash does not descend into child directories to check for permissions -- Key: HDFS-8312 URL: https://issues.apache.org/jira/browse/HDFS-8312 Project: Hadoop HDFS Issue Type: Bug Components: HDFS, security Affects Versions: 2.2.0, 2.6.0 Reporter: Eric Yang Assignee: Kiran Kumar M R HDFS trash does not descend into child directories to check whether the user has permission to delete the files. For example: Run the following commands to initialize the directory structure as the super user: {code} hadoop fs -mkdir /BSS/level1 hadoop fs -mkdir /BSS/level1/level2 hadoop fs -mkdir /BSS/level1/level2/level3 hadoop fs -put /tmp/appConfig.json /BSS/level1/level2/level3/testfile.txt hadoop fs -chown user1:users /BSS/level1/level2/level3/testfile.txt hadoop fs -chown -R user1:users /BSS/level1 hadoop fs -chmod -R 750 /BSS/level1 hadoop fs -chmod -R 640 /BSS/level1/level2/level3/testfile.txt hadoop fs -chmod 775 /BSS {code} Change to a normal user called user2. When trash is enabled: {code} sudo su user2 - hadoop fs -rm -r /BSS/level1 15/05/01 16:51:20 INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 3600 minutes, Emptier interval = 0 minutes. Moved: 'hdfs://bdvs323.svl.ibm.com:9000/BSS/level1' to trash at: hdfs://bdvs323.svl.ibm.com:9000/user/user2/.Trash/Current {code} When trash is disabled: {code} /opt/ibm/biginsights/IHC/bin/hadoop fs -Dfs.trash.interval=0 -rm -r /BSS/level1 15/05/01 16:58:31 INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 0 minutes, Emptier interval = 0 minutes. rm: Permission denied: user=user2, access=ALL, inode=/BSS/level1:user1:users:drwxr-x--- {code} There is inconsistency between trash behavior and delete behavior. When trash is enabled, files owned by user1 are deleted by user2. 
It looks like trash does not recursively validate if the child directory files can be removed. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (HDFS-8310) Fix TestCLI.testAll help: help for find on Windows
[ https://issues.apache.org/jira/browse/HDFS-8310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kiran Kumar M R reassigned HDFS-8310: - Assignee: Kiran Kumar M R Fix TestCLI.testAll help: help for find on Windows Key: HDFS-8310 URL: https://issues.apache.org/jira/browse/HDFS-8310 Project: Hadoop HDFS Issue Type: Sub-task Components: test Affects Versions: 2.7.0 Reporter: Xiaoyu Yao Assignee: Kiran Kumar M R Priority: Minor The test uses RegexAcrossOutputComparator in a single regex, which does not match on Windows as shown below. {code} 2015-04-30 01:14:01,737 INFO cli.CLITestHelper (CLITestHelper.java:displayResults(155)) - --- 2015-04-30 01:14:01,737 INFO cli.CLITestHelper (CLITestHelper.java:displayResults(156)) - Test ID: [31] 2015-04-30 01:14:01,737 INFO cli.CLITestHelper (CLITestHelper.java:displayResults(157)) -Test Description: [help: help for find] 2015-04-30 01:14:01,737 INFO cli.CLITestHelper (CLITestHelper.java:displayResults(158)) - 2015-04-30 01:14:01,738 INFO cli.CLITestHelper (CLITestHelper.java:displayResults(162)) - Test Commands: [-help find] 2015-04-30 01:14:01,738 INFO cli.CLITestHelper (CLITestHelper.java:displayResults(166)) - 2015-04-30 01:14:01,738 INFO cli.CLITestHelper (CLITestHelper.java:displayResults(173)) - 2015-04-30 01:14:01,738 INFO cli.CLITestHelper (CLITestHelper.java:displayResults(177)) - Comparator: [RegexpAcrossOutputComparator] 2015-04-30 01:14:01,738 INFO cli.CLITestHelper (CLITestHelper.java:displayResults(179)) - Comparision result: [fail] 2015-04-30 01:14:01,739 INFO cli.CLITestHelper (CLITestHelper.java:displayResults(181)) - Expected output: [-find path \.\.\. expression \.\.\. : Finds all files that match the specified expression and applies selected actions to them\. If no path is specified then defaults to the current working directory\. If no expression is specified then defaults to -print\. 
The following primary expressions are recognised: -name pattern -iname pattern Evaluates as true if the basename of the file matches the pattern using standard file system globbing\. If -iname is used then the match is case insensitive\. -print -print0 Always evaluates to true. Causes the current pathname to be written to standard output followed by a newline. If the -print0 expression is used then an ASCII NULL character is appended rather than a newline. The following operators are recognised: expression -a expression expression -and expression expression expression Logical AND operator for joining two expressions\. Returns true if both child expressions return true\. Implied by the juxtaposition of two expressions and so does not need to be explicitly specified\. The second expression will not be applied if the first fails\. ] 2015-04-30 01:14:01,739 INFO cli.CLITestHelper (CLITestHelper.java:displayResults(183)) - Actual output: [-find path ... expression ... : Finds all files that match the specified expression and applies selected actions to them. If no path is specified then defaults to the current working directory. If no expression is specified then defaults to -print. The following primary expressions are recognised: -name pattern -iname pattern Evaluates as true if the basename of the file matches the pattern using standard file system globbing. If -iname is used then the match is case insensitive. -print -print0 Always evaluates to true. Causes the current pathname to be written to standard output followed by a newline. If the -print0 expression is used then an ASCII NULL character is appended rather than a newline. The following operators are recognised: expression -a expression expression -and expression expression expression Logical AND operator for joining two expressions. Returns true if both child expressions return true. Implied by the juxtaposition of two expressions and so does not need to be explicitly specified. 
The second expression will not be applied if the first fails. ] {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-8310) Fix TestCLI.testAll help: help for find on Windows
[ https://issues.apache.org/jira/browse/HDFS-8310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kiran Kumar M R updated HDFS-8310: -- Attachment: HDFS-8310-001.patch Fix TestCLI.testAll help: help for find on Windows Key: HDFS-8310 URL: https://issues.apache.org/jira/browse/HDFS-8310 Project: Hadoop HDFS Issue Type: Sub-task Components: test Affects Versions: 2.7.0 Reporter: Xiaoyu Yao Assignee: Kiran Kumar M R Priority: Minor Attachments: HDFS-8310-001.patch The test uses RegexAcrossOutputComparator in a single regex, which does not match on Windows as shown below. {code} 2015-04-30 01:14:01,737 INFO cli.CLITestHelper (CLITestHelper.java:displayResults(155)) - --- 2015-04-30 01:14:01,737 INFO cli.CLITestHelper (CLITestHelper.java:displayResults(156)) - Test ID: [31] 2015-04-30 01:14:01,737 INFO cli.CLITestHelper (CLITestHelper.java:displayResults(157)) -Test Description: [help: help for find] 2015-04-30 01:14:01,737 INFO cli.CLITestHelper (CLITestHelper.java:displayResults(158)) - 2015-04-30 01:14:01,738 INFO cli.CLITestHelper (CLITestHelper.java:displayResults(162)) - Test Commands: [-help find] 2015-04-30 01:14:01,738 INFO cli.CLITestHelper (CLITestHelper.java:displayResults(166)) - 2015-04-30 01:14:01,738 INFO cli.CLITestHelper (CLITestHelper.java:displayResults(173)) - 2015-04-30 01:14:01,738 INFO cli.CLITestHelper (CLITestHelper.java:displayResults(177)) - Comparator: [RegexpAcrossOutputComparator] 2015-04-30 01:14:01,738 INFO cli.CLITestHelper (CLITestHelper.java:displayResults(179)) - Comparision result: [fail] 2015-04-30 01:14:01,739 INFO cli.CLITestHelper (CLITestHelper.java:displayResults(181)) - Expected output: [-find path \.\.\. expression \.\.\. : Finds all files that match the specified expression and applies selected actions to them\. If no path is specified then defaults to the current working directory\. If no expression is specified then defaults to -print\. 
The following primary expressions are recognised: -name pattern -iname pattern Evaluates as true if the basename of the file matches the pattern using standard file system globbing\. If -iname is used then the match is case insensitive\. -print -print0 Always evaluates to true. Causes the current pathname to be written to standard output followed by a newline. If the -print0 expression is used then an ASCII NULL character is appended rather than a newline. The following operators are recognised: expression -a expression expression -and expression expression expression Logical AND operator for joining two expressions\. Returns true if both child expressions return true\. Implied by the juxtaposition of two expressions and so does not need to be explicitly specified\. The second expression will not be applied if the first fails\. ] 2015-04-30 01:14:01,739 INFO cli.CLITestHelper (CLITestHelper.java:displayResults(183)) - Actual output: [-find path ... expression ... : Finds all files that match the specified expression and applies selected actions to them. If no path is specified then defaults to the current working directory. If no expression is specified then defaults to -print. The following primary expressions are recognised: -name pattern -iname pattern Evaluates as true if the basename of the file matches the pattern using standard file system globbing. If -iname is used then the match is case insensitive. -print -print0 Always evaluates to true. Causes the current pathname to be written to standard output followed by a newline. If the -print0 expression is used then an ASCII NULL character is appended rather than a newline. The following operators are recognised: expression -a expression expression -and expression expression expression Logical AND operator for joining two expressions. Returns true if both child expressions return true. Implied by the juxtaposition of two expressions and so does not need to be explicitly specified. 
The second expression will not be applied if the first fails. ] {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-8310) Fix TestCLI.testAll help: help for find on Windows
[ https://issues.apache.org/jira/browse/HDFS-8310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14525471#comment-14525471 ] Kiran Kumar M R commented on HDFS-8310: --- The test case fails because of the carriage return ({{\r}}) character in the line endings. The output of running {{hadoop fs -help find}} on Windows uses {{\r\n}} line endings, while the expected output string is taken from testConf.xml and contains only {{\n}}, because the XML (SAX) parser normalizes newlines on all platforms. Due to this difference in newline characters, the test output comparison fails. {{RegexpAcrossOutputComparator}} is intended for comparing multi-line outputs. I have modified it to strip {{\r}} from the input parameters before doing the regex comparison:
{code}
if (Shell.WINDOWS) {
  actual = actual.replaceAll("\\r", "");
  expected = expected.replaceAll("\\r", "");
}
{code}
After this modification the test cases pass. Please review the fix. Fix TestCLI.testAll help: help for find on Windows Key: HDFS-8310 URL: https://issues.apache.org/jira/browse/HDFS-8310 Project: Hadoop HDFS Issue Type: Sub-task Components: test Affects Versions: 2.7.0 Reporter: Xiaoyu Yao Assignee: Kiran Kumar M R Priority: Minor The test uses RegexpAcrossOutputComparator in a single regex, which does not match on Windows as shown below.
{code} 2015-04-30 01:14:01,737 INFO cli.CLITestHelper (CLITestHelper.java:displayResults(155)) - --- 2015-04-30 01:14:01,737 INFO cli.CLITestHelper (CLITestHelper.java:displayResults(156)) - Test ID: [31] 2015-04-30 01:14:01,737 INFO cli.CLITestHelper (CLITestHelper.java:displayResults(157)) -Test Description: [help: help for find] 2015-04-30 01:14:01,737 INFO cli.CLITestHelper (CLITestHelper.java:displayResults(158)) - 2015-04-30 01:14:01,738 INFO cli.CLITestHelper (CLITestHelper.java:displayResults(162)) - Test Commands: [-help find] 2015-04-30 01:14:01,738 INFO cli.CLITestHelper (CLITestHelper.java:displayResults(166)) - 2015-04-30 01:14:01,738 INFO cli.CLITestHelper (CLITestHelper.java:displayResults(173)) - 2015-04-30 01:14:01,738 INFO cli.CLITestHelper (CLITestHelper.java:displayResults(177)) - Comparator: [RegexpAcrossOutputComparator] 2015-04-30 01:14:01,738 INFO cli.CLITestHelper (CLITestHelper.java:displayResults(179)) - Comparision result: [fail] 2015-04-30 01:14:01,739 INFO cli.CLITestHelper (CLITestHelper.java:displayResults(181)) - Expected output: [-find path \.\.\. expression \.\.\. : Finds all files that match the specified expression and applies selected actions to them\. If no path is specified then defaults to the current working directory\. If no expression is specified then defaults to -print\. The following primary expressions are recognised: -name pattern -iname pattern Evaluates as true if the basename of the file matches the pattern using standard file system globbing\. If -iname is used then the match is case insensitive\. -print -print0 Always evaluates to true. Causes the current pathname to be written to standard output followed by a newline. If the -print0 expression is used then an ASCII NULL character is appended rather than a newline. The following operators are recognised: expression -a expression expression -and expression expression expression Logical AND operator for joining two expressions\. 
Returns true if both child expressions return true\. Implied by the juxtaposition of two expressions and so does not need to be explicitly specified\. The second expression will not be applied if the first fails\. ] 2015-04-30 01:14:01,739 INFO cli.CLITestHelper (CLITestHelper.java:displayResults(183)) - Actual output: [-find path ... expression ... : Finds all files that match the specified expression and applies selected actions to them. If no path is specified then defaults to the current working directory. If no expression is specified then defaults to -print. The following primary expressions are recognised: -name pattern -iname pattern Evaluates as true if the basename of the file matches the pattern using standard file system globbing. If -iname is used then the match is case insensitive. -print -print0 Always evaluates to true. Causes the current pathname to be written to standard output followed by a newline. If the -print0 expression is used then an ASCII NULL character
[jira] [Updated] (HDFS-8310) Fix TestCLI.testAll help: help for find on Windows
[ https://issues.apache.org/jira/browse/HDFS-8310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kiran Kumar M R updated HDFS-8310: -- Target Version/s: 3.0.0, 2.8.0, 2.7.1 Status: Patch Available (was: Open) Fix TestCLI.testAll help: help for find on Windows Key: HDFS-8310 URL: https://issues.apache.org/jira/browse/HDFS-8310 Project: Hadoop HDFS Issue Type: Sub-task Components: test Affects Versions: 2.7.0 Reporter: Xiaoyu Yao Assignee: Kiran Kumar M R Priority: Minor Attachments: HDFS-8310-001.patch The test uses RegexAcrossOutputComparator in a single regex, which does not match on Windows as shown below. {code} 2015-04-30 01:14:01,737 INFO cli.CLITestHelper (CLITestHelper.java:displayResults(155)) - --- 2015-04-30 01:14:01,737 INFO cli.CLITestHelper (CLITestHelper.java:displayResults(156)) - Test ID: [31] 2015-04-30 01:14:01,737 INFO cli.CLITestHelper (CLITestHelper.java:displayResults(157)) -Test Description: [help: help for find] 2015-04-30 01:14:01,737 INFO cli.CLITestHelper (CLITestHelper.java:displayResults(158)) - 2015-04-30 01:14:01,738 INFO cli.CLITestHelper (CLITestHelper.java:displayResults(162)) - Test Commands: [-help find] 2015-04-30 01:14:01,738 INFO cli.CLITestHelper (CLITestHelper.java:displayResults(166)) - 2015-04-30 01:14:01,738 INFO cli.CLITestHelper (CLITestHelper.java:displayResults(173)) - 2015-04-30 01:14:01,738 INFO cli.CLITestHelper (CLITestHelper.java:displayResults(177)) - Comparator: [RegexpAcrossOutputComparator] 2015-04-30 01:14:01,738 INFO cli.CLITestHelper (CLITestHelper.java:displayResults(179)) - Comparision result: [fail] 2015-04-30 01:14:01,739 INFO cli.CLITestHelper (CLITestHelper.java:displayResults(181)) - Expected output: [-find path \.\.\. expression \.\.\. : Finds all files that match the specified expression and applies selected actions to them\. If no path is specified then defaults to the current working directory\. If no expression is specified then defaults to -print\. 
The following primary expressions are recognised: -name pattern -iname pattern Evaluates as true if the basename of the file matches the pattern using standard file system globbing\. If -iname is used then the match is case insensitive\. -print -print0 Always evaluates to true. Causes the current pathname to be written to standard output followed by a newline. If the -print0 expression is used then an ASCII NULL character is appended rather than a newline. The following operators are recognised: expression -a expression expression -and expression expression expression Logical AND operator for joining two expressions\. Returns true if both child expressions return true\. Implied by the juxtaposition of two expressions and so does not need to be explicitly specified\. The second expression will not be applied if the first fails\. ] 2015-04-30 01:14:01,739 INFO cli.CLITestHelper (CLITestHelper.java:displayResults(183)) - Actual output: [-find path ... expression ... : Finds all files that match the specified expression and applies selected actions to them. If no path is specified then defaults to the current working directory. If no expression is specified then defaults to -print. The following primary expressions are recognised: -name pattern -iname pattern Evaluates as true if the basename of the file matches the pattern using standard file system globbing. If -iname is used then the match is case insensitive. -print -print0 Always evaluates to true. Causes the current pathname to be written to standard output followed by a newline. If the -print0 expression is used then an ASCII NULL character is appended rather than a newline. The following operators are recognised: expression -a expression expression -and expression expression expression Logical AND operator for joining two expressions. Returns true if both child expressions return true. Implied by the juxtaposition of two expressions and so does not need to be explicitly specified. 
The second expression will not be applied if the first fails. ] {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-8162) Stack trace routed to standard out
[ https://issues.apache.org/jira/browse/HDFS-8162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kiran Kumar M R updated HDFS-8162: -- Assignee: (was: Kiran Kumar M R) Stack trace routed to standard out -- Key: HDFS-8162 URL: https://issues.apache.org/jira/browse/HDFS-8162 Project: Hadoop HDFS Issue Type: Improvement Components: libhdfs Affects Versions: 2.5.2 Reporter: Rod Priority: Minor Calling hdfsOpenFile() can generate a stacktrace printout to standard out, which can be problematic for caller program which is making use of standard out. libhdfs stacktraces should be routed to standard error. Example of stacktrace: WARN [main] hdfs.BlockReaderFactory (BlockReaderFactory.java:getRemoteBlockReaderFromTcp(693)) - I/O error constructing remote block reader. org.apache.hadoop.net.ConnectTimeoutException: 6 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=/x.x.x.10:50010] at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:533) at org.apache.hadoop.hdfs.DFSClient.newConnectedPeer(DFSClient.java:3101) at org.apache.hadoop.hdfs.BlockReaderFactory.nextTcpPeer(BlockReaderFactory.java:755) at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:670) at org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:337) at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:576) at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:800) at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:854) at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:143) 2015-04-16 10:32:13,946 WARN [main] hdfs.DFSClient (DFSInputStream.java:blockSeekTo(612)) - Failed to connect to /x.x.x.10:50010 for block, add to deadNodes and continue. org.apache.hadoop.net.ConnectTimeoutException: 6 millis timeout while waiting for channel to be ready for connect. 
ch : java.nio.channels.SocketChannel[connection-pending remote=/x.x.x.10:50010] org.apache.hadoop.net.ConnectTimeoutException: 6 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=/x.x.x.10:50010] at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:533) at org.apache.hadoop.hdfs.DFSClient.newConnectedPeer(DFSClient.java:3101) at org.apache.hadoop.hdfs.BlockReaderFactory.nextTcpPeer(BlockReaderFactory.java:755) at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:670) at org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:337) at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:576) at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:800) at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:854) at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:143) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (HDFS-8162) Stack trace routed to standard out
[ https://issues.apache.org/jira/browse/HDFS-8162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kiran Kumar M R reassigned HDFS-8162: - Assignee: Kiran Kumar M R Stack trace routed to standard out -- Key: HDFS-8162 URL: https://issues.apache.org/jira/browse/HDFS-8162 Project: Hadoop HDFS Issue Type: Improvement Components: libhdfs Affects Versions: 2.5.2 Reporter: Rod Assignee: Kiran Kumar M R Priority: Minor Calling hdfsOpenFile() can generate a stacktrace printout to standard out, which can be problematic for caller program which is making use of standard out. libhdfs stacktraces should be routed to standard error. Example of stacktrace: WARN [main] hdfs.BlockReaderFactory (BlockReaderFactory.java:getRemoteBlockReaderFromTcp(693)) - I/O error constructing remote block reader. org.apache.hadoop.net.ConnectTimeoutException: 6 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=/x.x.x.10:50010] at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:533) at org.apache.hadoop.hdfs.DFSClient.newConnectedPeer(DFSClient.java:3101) at org.apache.hadoop.hdfs.BlockReaderFactory.nextTcpPeer(BlockReaderFactory.java:755) at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:670) at org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:337) at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:576) at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:800) at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:854) at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:143) 2015-04-16 10:32:13,946 WARN [main] hdfs.DFSClient (DFSInputStream.java:blockSeekTo(612)) - Failed to connect to /x.x.x.10:50010 for block, add to deadNodes and continue. 
org.apache.hadoop.net.ConnectTimeoutException: 6 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=/x.x.x.10:50010] org.apache.hadoop.net.ConnectTimeoutException: 6 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=/x.x.x.10:50010] at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:533) at org.apache.hadoop.hdfs.DFSClient.newConnectedPeer(DFSClient.java:3101) at org.apache.hadoop.hdfs.BlockReaderFactory.nextTcpPeer(BlockReaderFactory.java:755) at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:670) at org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:337) at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:576) at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:800) at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:854) at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:143) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-7938) OpensslSecureRandom.c pthread_threadid_np usage signature is wrong on 32-bit Mac
[ https://issues.apache.org/jira/browse/HDFS-7938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kiran Kumar M R updated HDFS-7938: -- Attachment: HDFS-7938-001.patch OpensslSecureRandom.c pthread_threadid_np usage signature is wrong on 32-bit Mac Key: HDFS-7938 URL: https://issues.apache.org/jira/browse/HDFS-7938 Project: Hadoop HDFS Issue Type: Bug Affects Versions: 2.7.0 Reporter: Colin Patrick McCabe Assignee: Kiran Kumar M R Priority: Critical Attachments: HDFS-7938-001.patch In OpensslSecureRandom.c, pthread_threadid_np is being used with an unsigned long, but the type signature requires a uint64_t. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-7938) OpensslSecureRandom.c pthread_threadid_np usage signature is wrong on 32-bit Mac
[ https://issues.apache.org/jira/browse/HDFS-7938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kiran Kumar M R updated HDFS-7938: -- Status: Patch Available (was: Open) Chris, I need your help verifying this patch on Mac OS X. OpensslSecureRandom.c pthread_threadid_np usage signature is wrong on 32-bit Mac Key: HDFS-7938 URL: https://issues.apache.org/jira/browse/HDFS-7938 Project: Hadoop HDFS Issue Type: Bug Affects Versions: 2.7.0 Reporter: Colin Patrick McCabe Assignee: Kiran Kumar M R Priority: Critical Attachments: HDFS-7938-001.patch In OpensslSecureRandom.c, pthread_threadid_np is being used with an unsigned long, but the type signature requires a uint64_t. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-7938) OpensslSecureRandom.c pthread_threadid_np usage signature is wrong on 32-bit Mac
[ https://issues.apache.org/jira/browse/HDFS-7938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14364704#comment-14364704 ] Kiran Kumar M R commented on HDFS-7938: --- I am continuing the discussion from [~cmccabe]'s comment https://issues.apache.org/jira/browse/HADOOP-11638?focusedCommentId=14364215page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14364215 bq. I looked at this and found that {{pthread_threadid_np}} on Mac has the type signature: {{int pthread_threadid_np(pthread_t thread, __uint64_t *thread_id)}} This doesn't match with using an {{unsigned long}}. I'm not sure under what conditions an unsigned long is different than a {{uint64_t}} on Mac (on Linux, that would be the case with 32-bit compilation). So this patch may have a buffer overflow in that case. I agree there may be a buffer overflow in the case of 32-bit compilation on Mac. I went ahead with the patch since the 64-bit build is the most common. I will submit a patch soon that uses {{uint64_t}} and casts it to {{unsigned long}}. OpensslSecureRandom.c pthread_threadid_np usage signature is wrong on 32-bit Mac Key: HDFS-7938 URL: https://issues.apache.org/jira/browse/HDFS-7938 Project: Hadoop HDFS Issue Type: Bug Affects Versions: 2.7.0 Reporter: Colin Patrick McCabe Assignee: Kiran Kumar M R Priority: Critical In OpensslSecureRandom.c, pthread_threadid_np is being used with an unsigned long, but the type signature requires a uint64_t. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (HDFS-7938) OpensslSecureRandom.c pthread_threadid_np usage signature is wrong on 32-bit Mac
[ https://issues.apache.org/jira/browse/HDFS-7938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kiran Kumar M R reassigned HDFS-7938: - Assignee: Kiran Kumar M R OpensslSecureRandom.c pthread_threadid_np usage signature is wrong on 32-bit Mac Key: HDFS-7938 URL: https://issues.apache.org/jira/browse/HDFS-7938 Project: Hadoop HDFS Issue Type: Bug Affects Versions: 2.7.0 Reporter: Colin Patrick McCabe Assignee: Kiran Kumar M R Priority: Critical In OpensslSecureRandom.c, pthread_threadid_np is being used with an unsigned long, but the type signature requires a uint64_t. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (HDFS-7899) Improve EOF error message
[ https://issues.apache.org/jira/browse/HDFS-7899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kiran Kumar M R reassigned HDFS-7899: - Assignee: Kiran Kumar M R Improve EOF error message - Key: HDFS-7899 URL: https://issues.apache.org/jira/browse/HDFS-7899 Project: Hadoop HDFS Issue Type: Bug Components: hdfs-client Affects Versions: 2.6.0 Reporter: Harsh J Assignee: Kiran Kumar M R Priority: Minor Currently, a DN disconnection for reasons other than connection timeout or refused messages, such as an EOF message as a result of rejection or other network fault, reports in this manner: {code} WARN org.apache.hadoop.hdfs.DFSClient: Failed to connect to /x.x.x.x: for block, add to deadNodes and continue. java.io.EOFException: Premature EOF: no length prefix available java.io.EOFException: Premature EOF: no length prefix available at org.apache.hadoop.hdfs.protocol.HdfsProtoUtil.vintPrefixed(HdfsProtoUtil.java:171) at org.apache.hadoop.hdfs.RemoteBlockReader2.newBlockReader(RemoteBlockReader2.java:392) at org.apache.hadoop.hdfs.BlockReaderFactory.newBlockReader(BlockReaderFactory.java:137) at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:1103) at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:538) at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:750) at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:794) at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:602) {code} This is not very clear to a user (warn's at the hdfs-client). It could likely be improved with a more diagnosable message, or at least the direct reason than an EOF. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-7774) Unresolved symbols error while compiling HDFS on Windows 7/32 bit
[ https://issues.apache.org/jira/browse/HDFS-7774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14342907#comment-14342907 ] Kiran Kumar M R commented on HDFS-7774: --- Thanks for the review and committing the patch Chris. Unresolved symbols error while compiling HDFS on Windows 7/32 bit - Key: HDFS-7774 URL: https://issues.apache.org/jira/browse/HDFS-7774 Project: Hadoop HDFS Issue Type: Bug Components: build, native Affects Versions: 2.6.0 Environment: Windows 7, 32 bit, Visual Studio 10. Windows PATH: PATH=C:\ProgramData\Oracle\Java\javapath;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;D:\PIG\pig-0.13.0\bin;C:\PROGRA~1\JAVA\JDK1.7.0_71\bin;C:\Program Files\Microsoft Windows Performance Toolkit\;C:\GNUWIN32\GETGNUWIN32\BIN;C:\CYGWIN\BIN;D:\git\cmd;D:\GIT\BIN;D:\MAVEN-3-2-3\APACHE-MAVEN-3.2.3-BIN\apache-maven-3.2.3\bin;D:\UTILS;c:\windows\Microsoft.NET\Framework\v4.0.30319;D:\cmake\bin;c:\progra~1\Micros~1.0\vc\crt\src; SDK Path: PATH=C:\Windows\Microsoft.NET\Framework\v4.0.30319;C:\Windows\Microsoft.NET\Framework\v3.5;;C:\Program Files\Microsoft Visual Studio 10.0\Common7\IDE;C:\Program Files\Microsoft Visual Studio 10.0\Common7\Tools;;C:\Program Files\Microsoft Visual Studio 10.0\VC\Bin;C:\Program Files\Microsoft Visual Studio 10.0\VC\Bin\VCPackages;;C:\Program Files\Microsoft SDKs\Windows\v7.1\Bin\NETFX 4.0 Tools;C:\Program Files\Microsoft SDKs\Windows\v7.1\Bin;;C:\ProgramData\Oracle\Java\javapath;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;D:\PIG\pig-0.13.0\bin;C:\PROGRA~1\JAVA\JDK1.7.0_71\bin;C:\Program Files\Microsoft Windows Performance Toolkit\;C:\GNUWIN32\GETGNUWIN32\BIN;C:\CYGWIN\BIN;D:\git\cmd;D:\GIT\BIN;D:\MAVEN-3-2-3\APACHE-MAVEN-3.2.3-BIN\apache-maven-3.2.3\bin;D:\UTILS;c:\windows\Microsoft.NET\Framework\v4.0.30319;D:\cmake\bin;c:\progra~1\Micros~1.0\vc\crt\src; Reporter: Venkatasubramaniam Ramakrishnan 
Assignee: Kiran Kumar M R Priority: Critical Labels: build Fix For: 2.7.0 Attachments: HDFS-7774-001.patch, HDFS-7774-002.patch, Win32_Changes-temp.patch I am getting the following error in the hdfs module compilation: . . . [exec] ClCompile: [exec] All outputs are up-to-date. [exec] Lib: [exec] All outputs are up-to-date. [exec] hdfs_static.vcxproj - D:\h\hadoop-2.6.0-src\hadoop-hdfs-project\hadoop-hdfs\target\native\target\bin\RelWithDebInfo\hdfs.lib [exec] FinalizeBuildStatus: [exec] Deleting file hdfs_static.dir\RelWithDebInfo\hdfs_static.unsuccessfulbuild. [exec] Touching hdfs_static.dir\RelWithDebInfo\hdfs_static.lastbuildstate. [exec] Done Building Project D:\h\hadoop-2.6.0-src\hadoop-hdfs-project\hadoop-hdfs\target\native\hdfs_static.vcxproj (default targets). [exec] Done Building Project D:\h\hadoop-2.6.0-src\hadoop-hdfs-project\hadoop-hdfs\target\native\ALL_BUILD.vcxproj (default targets) -- FAILED. [exec] [exec] Build FAILED. [exec] [exec] D:\h\hadoop-2.6.0-src\hadoop-hdfs-project\hadoop-hdfs\target\native\ALL_BUILD.vcxproj (default target) (1) - [exec] D:\h\hadoop-2.6.0-src\hadoop-hdfs-project\hadoop-hdfs\target\native\hdfs.vcxproj (default target) (3) - [exec] (Link target) - [exec] thread_local_storage.obj : error LNK2001: unresolved external symbol _tls_used [D:\h\hadoop-2.6.0-src\hadoop-hdfs-project\hadoop-hdfs\target\native\hdfs.vcxproj] [exec] thread_local_storage.obj : error LNK2001: unresolved external symbol pTlsCallback [D:\h\hadoop-2.6.0-src\hadoop-hdfs-project\hadoop-hdfs\target\native\hdfs.vcxproj] [exec] D:\h\hadoop-2.6.0-src\hadoop-hdfs-project\hadoop-hdfs\target\native\target\bin\RelWithDebInfo\hdfs.dll : fatal error LNK1120: 2 unresolved externals [D:\h\hadoop-2.6.0-src\hadoop-hdfs-project\hadoop-hdfs\target\native\hdfs.vcxproj] [exec] [exec] 0 Warning(s) [exec] 3 Error(s) [exec] [exec] Time Elapsed 00:00:40.39 [INFO] [INFO] Reactor Summary: [INFO] [INFO] Apache Hadoop HDFS . FAILURE [02:27 min] [INFO] Apache Hadoop HttpFS ... 
SKIPPED -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-7774) Unresolved symbols error while compiling HDFS on Windows 7/32 bit
[ https://issues.apache.org/jira/browse/HDFS-7774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14338795#comment-14338795 ] Kiran Kumar M R commented on HDFS-7774: --- I ran the failed test case {{org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.testTruncateWithDataNodesRestart}} locally, and it passes. The test case failure is not related to the changes in this patch. Unresolved symbols error while compiling HDFS on Windows 7/32 bit - Key: HDFS-7774 URL: https://issues.apache.org/jira/browse/HDFS-7774 Project: Hadoop HDFS Issue Type: Bug Components: build, native Affects Versions: 2.6.0 Environment: Windows 7, 32 bit, Visual Studio 10. Windows PATH: PATH=C:\ProgramData\Oracle\Java\javapath;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;D:\PIG\pig-0.13.0\bin;C:\PROGRA~1\JAVA\JDK1.7.0_71\bin;C:\Program Files\Microsoft Windows Performance Toolkit\;C:\GNUWIN32\GETGNUWIN32\BIN;C:\CYGWIN\BIN;D:\git\cmd;D:\GIT\BIN;D:\MAVEN-3-2-3\APACHE-MAVEN-3.2.3-BIN\apache-maven-3.2.3\bin;D:\UTILS;c:\windows\Microsoft.NET\Framework\v4.0.30319;D:\cmake\bin;c:\progra~1\Micros~1.0\vc\crt\src; SDK Path: PATH=C:\Windows\Microsoft.NET\Framework\v4.0.30319;C:\Windows\Microsoft.NET\Framework\v3.5;;C:\Program Files\Microsoft Visual Studio 10.0\Common7\IDE;C:\Program Files\Microsoft Visual Studio 10.0\Common7\Tools;;C:\Program Files\Microsoft Visual Studio 10.0\VC\Bin;C:\Program Files\Microsoft Visual Studio 10.0\VC\Bin\VCPackages;;C:\Program Files\Microsoft SDKs\Windows\v7.1\Bin\NETFX 4.0 Tools;C:\Program Files\Microsoft SDKs\Windows\v7.1\Bin;;C:\ProgramData\Oracle\Java\javapath;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;D:\PIG\pig-0.13.0\bin;C:\PROGRA~1\JAVA\JDK1.7.0_71\bin;C:\Program Files\Microsoft Windows Performance 
Toolkit\;C:\GNUWIN32\GETGNUWIN32\BIN;C:\CYGWIN\BIN;D:\git\cmd;D:\GIT\BIN;D:\MAVEN-3-2-3\APACHE-MAVEN-3.2.3-BIN\apache-maven-3.2.3\bin;D:\UTILS;c:\windows\Microsoft.NET\Framework\v4.0.30319;D:\cmake\bin;c:\progra~1\Micros~1.0\vc\crt\src; Reporter: Venkatasubramaniam Ramakrishnan Assignee: Kiran Kumar M R Priority: Critical Labels: build Attachments: HDFS-7774-001.patch, HDFS-7774-002.patch, Win32_Changes-temp.patch I am getting the following error in the hdfs module compilation: . . . [exec] ClCompile: [exec] All outputs are up-to-date. [exec] Lib: [exec] All outputs are up-to-date. [exec] hdfs_static.vcxproj - D:\h\hadoop-2.6.0-src\hadoop-hdfs-project\hadoop-hdfs\target\native\target\bin\RelWithDebInfo\hdfs.lib [exec] FinalizeBuildStatus: [exec] Deleting file hdfs_static.dir\RelWithDebInfo\hdfs_static.unsuccessfulbuild. [exec] Touching hdfs_static.dir\RelWithDebInfo\hdfs_static.lastbuildstate. [exec] Done Building Project D:\h\hadoop-2.6.0-src\hadoop-hdfs-project\hadoop-hdfs\target\native\hdfs_static.vcxproj (default targets). [exec] Done Building Project D:\h\hadoop-2.6.0-src\hadoop-hdfs-project\hadoop-hdfs\target\native\ALL_BUILD.vcxproj (default targets) -- FAILED. [exec] [exec] Build FAILED. 
[exec] [exec] D:\h\hadoop-2.6.0-src\hadoop-hdfs-project\hadoop-hdfs\target\native\ALL_BUILD.vcxproj (default target) (1) - [exec] D:\h\hadoop-2.6.0-src\hadoop-hdfs-project\hadoop-hdfs\target\native\hdfs.vcxproj (default target) (3) - [exec] (Link target) - [exec] thread_local_storage.obj : error LNK2001: unresolved external symbol _tls_used [D:\h\hadoop-2.6.0-src\hadoop-hdfs-project\hadoop-hdfs\target\native\hdfs.vcxproj] [exec] thread_local_storage.obj : error LNK2001: unresolved external symbol pTlsCallback [D:\h\hadoop-2.6.0-src\hadoop-hdfs-project\hadoop-hdfs\target\native\hdfs.vcxproj] [exec] D:\h\hadoop-2.6.0-src\hadoop-hdfs-project\hadoop-hdfs\target\native\target\bin\RelWithDebInfo\hdfs.dll : fatal error LNK1120: 2 unresolved externals [D:\h\hadoop-2.6.0-src\hadoop-hdfs-project\hadoop-hdfs\target\native\hdfs.vcxproj] [exec] [exec] 0 Warning(s) [exec] 3 Error(s) [exec] [exec] Time Elapsed 00:00:40.39 [INFO] [INFO] Reactor Summary: [INFO] [INFO] Apache Hadoop HDFS . FAILURE [02:27 min] [INFO] Apache Hadoop HttpFS ... SKIPPED -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-7774) Unresolved symbols error while compiling HDFS on Windows 7/32 bit
[ https://issues.apache.org/jira/browse/HDFS-7774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14338044#comment-14338044 ] Kiran Kumar M R commented on HDFS-7774: --- Thanks for the suggestion [~cnauroth]. Please check the new patch; I have added an Ant condition to make the CMake generator parametrized. Tests passed for me with {{-Xmx512M}} set. I do not have access to pastebin from my workplace to post full test logs, so I am pasting some test output snippets here.
{code}
hadoop-hdfs-project\hadoop-hdfs> mvn test -Dtest=test_libhdfs_threaded
. . .
main:
 [echo] Running test_libhdfs_threaded
. . .
 [exec] testHdfsOperations(threadIdx=0): starting
 [exec] testHdfsOperations(threadIdx=1): starting
 [exec] testHdfsOperations(threadIdx=2): starting
. . .
 [exec] 2015-02-26 10:54:58,772 INFO mortbay.log (Slf4jLog.java:info(67)) - Stopped SelectChannelConnector@127.0.0.1:0
 [exec] 2015-02-26 10:54:58,773 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(210)) - Stopping DataNode metrics system...
 [exec] 2015-02-26 10:54:58,774 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(216)) - DataNode metrics system stopped.
 [exec] 2015-02-26 10:54:58,774 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:shutdown(600)) - DataNode metrics system shutdown complete.
 [echo] Finished test_native_mini_dfs
[INFO] Executed tasks
[INFO]
[INFO] BUILD SUCCESS
[INFO]
[INFO] Total time: 1:42.793s
[INFO] Finished at: Thu Feb 26 10:54:59 GMT+05:30 2015
[INFO] Final Memory: 35M/494M
[INFO]
{code}
[jira] [Updated] (HDFS-7774) Unresolved symbols error while compiling HDFS on Windows 7/32 bit
[ https://issues.apache.org/jira/browse/HDFS-7774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kiran Kumar M R updated HDFS-7774: -- Attachment: HDFS-7774-002.patch -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-7774) Unresolved symbols error while compiling HDFS on Windows 7/32 bit
[ https://issues.apache.org/jira/browse/HDFS-7774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kiran Kumar M R updated HDFS-7774: -- Attachment: HDFS-7774-001.patch -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-7774) Unresolved symbols error while compiling HDFS on Windows 7/32 bit
[ https://issues.apache.org/jira/browse/HDFS-7774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kiran Kumar M R updated HDFS-7774: -- Target Version/s: 3.0.0, 2.7.0 Status: Patch Available (was: Open)
HDFS-7774-001.patch makes the code compatible with Win32 TLS conventions. In Win32, all TLS symbols are prefixed with an underscore, so the fix can be done as below:
{code}
#ifdef _WIN64
#pragma comment(linker, "/INCLUDE:_tls_used")
#else
#pragma comment(linker, "/INCLUDE:__tls_used")
#endif

#ifdef _WIN64
#pragma comment(linker, "/INCLUDE:pTlsCallback")
#else
#pragma comment(linker, "/INCLUDE:_pTlsCallback")
#endif
{code}
I was not able to find a simple way to parametrize the CMake build generator {{Visual Studio 10 Win64}} in hadoop-hdfs-project/hadoop-hdfs/pom.xml for Win32. One option is to add a new profile, {{native-win32}}. Until then, the user needs to modify pom.xml manually. [~cnauroth], please review this patch and let me know if you have any suggestions for parametrizing the Win32 build. If there is no good option for the 2.7 release, we can ask users to modify pom.xml manually; this can be documented in the readme.
How to build on Win32 using this patch:
- Apply this patch
- Edit hadoop-hdfs-project/hadoop-hdfs/pom.xml: search for {{Visual Studio 10 Win64}} and change it to {{Visual Studio 10}}
- Set the environment variable Platform=Win32
- mvn install -Pnative-win -DskipTests
[jira] [Commented] (HDFS-7774) Unresolved symbols error while compiling HDFS on Windows 7/32 bit
[ https://issues.apache.org/jira/browse/HDFS-7774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14317798#comment-14317798 ] Kiran Kumar M R commented on HDFS-7774: --- The following changes are required to make the compilation succeed:
1. hadoop-hdfs-project\hadoop-hdfs\src\main\native\libhdfs\os\windows\thread.c
Line 31: Add {color:red}WINAPI{color} to the declaration:
{code}
31: static DWORD WINAPI runThread(LPVOID toRun) {
{code}
2. hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/windows/thread_local_storage.c
Comment out lines 99 and 105:
{code}
 99: // #pragma comment(linker, "/INCLUDE:_tls_used")
105: // #pragma comment(linker, "/INCLUDE:pTlsCallback")
{code}
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-7774) Unresolved symbols error while compiling HDFS on Windows 7/32 bit
[ https://issues.apache.org/jira/browse/HDFS-7774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kiran Kumar M R updated HDFS-7774: -- Attachment: Win32_Changes-temp.patch
This is a temporary patch to apply to make the Win32 compilation succeed. It is not meant to be merged into trunk; we need to come up with a better solution that makes the native code work on both 64-bit and 32-bit.
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (HDFS-7774) Unresolved symbols error while compiling HDFS on Windows 7/32 bit
[ https://issues.apache.org/jira/browse/HDFS-7774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kiran Kumar M R reassigned HDFS-7774: - Assignee: Kiran Kumar M R -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (HDFS-7492) If multiple threads call FsVolumeList#checkDirs at the same time, we should only do checkDirs once and give the results to all waiting threads
[ https://issues.apache.org/jira/browse/HDFS-7492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kiran Kumar M R reassigned HDFS-7492: - Assignee: Kiran Kumar M R If multiple threads call FsVolumeList#checkDirs at the same time, we should only do checkDirs once and give the results to all waiting threads -- Key: HDFS-7492 URL: https://issues.apache.org/jira/browse/HDFS-7492 Project: Hadoop HDFS Issue Type: Improvement Reporter: Colin Patrick McCabe Assignee: Kiran Kumar M R checkDirs is called when we encounter certain I/O errors. It's rare to get just a single I/O error... normally you start getting many errors when a disk is going bad. For this reason, we shouldn't start a new checkDirs scan for each error. Instead, if multiple threads call FsVolumeList#checkDirs at around the same time, we should only do checkDirs once and give the results to all the waiting threads. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
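The coalescing described above could be sketched roughly as follows. This is a minimal illustration, not the actual FsVolumeList patch; the class and method names here are hypothetical. The first caller starts the scan, and callers that arrive while it is in flight simply wait for and share its result.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.FutureTask;

// Sketch: only one directory scan runs at a time; callers that arrive
// while a scan is in flight block on the same FutureTask and share its
// result. (Hypothetical names -- not the actual FsVolumeList code.)
class CoalescingChecker {
    private FutureTask<List<String>> inFlight;  // current scan, guarded by "this"

    // Stand-in for the expensive per-volume directory check; returns the
    // list of volumes that failed.
    protected List<String> doCheckDirs() {
        return new ArrayList<>();
    }

    public List<String> checkDirs() throws Exception {
        FutureTask<List<String>> task;
        boolean runner = false;
        synchronized (this) {
            if (inFlight == null) {             // no scan running: start one
                inFlight = new FutureTask<>(this::doCheckDirs);
                runner = true;
            }
            task = inFlight;                    // late arrivals share this task
        }
        if (runner) {
            try {
                task.run();                     // only the first caller scans
            } finally {
                synchronized (this) { inFlight = null; }
            }
        }
        return task.get();                      // everyone gets the same result
    }
}
```

Note that a thread calling checkDirs after the in-flight scan has completed starts a fresh scan, so I/O errors reported later still trigger a new check.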
[jira] [Resolved] (HDFS-7137) HDFS Federation -- Adding a new Namenode to an existing HDFS cluster Document Has an Error
[ https://issues.apache.org/jira/browse/HDFS-7137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kiran Kumar M R resolved HDFS-7137. --- Resolution: Duplicate Fix Version/s: 3.0.0 Closed as a duplicate, since the patch given in HDFS-7667 fixes this issue. HDFS Federation -- Adding a new Namenode to an existing HDFS cluster Document Has an Error Key: HDFS-7137 URL: https://issues.apache.org/jira/browse/HDFS-7137 Project: Hadoop HDFS Issue Type: Improvement Components: documentation Reporter: zhangyubiao Assignee: Kiran Kumar M R Priority: Minor Labels: documentation Fix For: 3.0.0 In the document "HDFS Federation -- Adding a new Namenode to an existing HDFS cluster", $HADOOP_PREFIX_HOME/bin/hdfs dfadmin -refreshNameNode datanode_host_name:datanode_rpc_port should be $HADOOP_PREFIX_HOME/bin/hdfs dfsadmin -refreshNameNode datanode_host_name:datanode_rpc_port. It is just missing an 's' in dfadmin.
[jira] [Assigned] (HDFS-7137) HDFS Federation -- Adding a new Namenode to an existing HDFS cluster Document Has an Error
[ https://issues.apache.org/jira/browse/HDFS-7137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kiran Kumar M R reassigned HDFS-7137: - Assignee: Kiran Kumar M R HDFS Federation -- Adding a new Namenode to an existing HDFS cluster Document Has an Error Key: HDFS-7137 URL: https://issues.apache.org/jira/browse/HDFS-7137 Project: Hadoop HDFS Issue Type: Improvement Components: documentation Reporter: zhangyubiao Assignee: Kiran Kumar M R Priority: Minor Labels: documentation In the document "HDFS Federation -- Adding a new Namenode to an existing HDFS cluster", $HADOOP_PREFIX_HOME/bin/hdfs dfadmin -refreshNameNode datanode_host_name:datanode_rpc_port should be $HADOOP_PREFIX_HOME/bin/hdfs dfsadmin -refreshNameNode datanode_host_name:datanode_rpc_port. It is just missing an 's' in dfadmin.
[jira] [Commented] (HDFS-7137) HDFS Federation -- Adding a new Namenode to an existing HDFS cluster Document Has an Error
[ https://issues.apache.org/jira/browse/HDFS-7137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14302761#comment-14302761 ] Kiran Kumar M R commented on HDFS-7137: --- The fix for this issue is already given in HDFS-7667. HDFS Federation -- Adding a new Namenode to an existing HDFS cluster Document Has an Error Key: HDFS-7137 URL: https://issues.apache.org/jira/browse/HDFS-7137 Project: Hadoop HDFS Issue Type: Improvement Components: documentation Reporter: zhangyubiao Assignee: Kiran Kumar M R Priority: Minor Labels: documentation In the document "HDFS Federation -- Adding a new Namenode to an existing HDFS cluster", $HADOOP_PREFIX_HOME/bin/hdfs dfadmin -refreshNameNode datanode_host_name:datanode_rpc_port should be $HADOOP_PREFIX_HOME/bin/hdfs dfsadmin -refreshNameNode datanode_host_name:datanode_rpc_port. It is just missing an 's' in dfadmin.