[ 
https://issues.apache.org/jira/browse/HDFS-10354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tibor Kiss updated HDFS-10354:
------------------------------
    Description: 
Compilation fails with multiple errors on Mac OS X.
Unit test test_test_libhdfs_zerocopy_hdfs_static also fails to execute on OS X.

Compile error 1:
{noformat}
     [exec] Scanning dependencies of target common_obj
     [exec] [ 45%] Building CXX object 
main/native/libhdfspp/lib/common/CMakeFiles/common_obj.dir/base64.cc.o
     [exec] [ 45%] Building CXX object 
main/native/libhdfspp/lib/common/CMakeFiles/common_obj.dir/status.cc.o
     [exec] [ 46%] Building CXX object 
main/native/libhdfspp/lib/common/CMakeFiles/common_obj.dir/sasl_digest_md5.cc.o
     [exec] [ 46%] Building CXX object 
main/native/libhdfspp/lib/common/CMakeFiles/common_obj.dir/hdfs_public_api.cc.o
     [exec] [ 47%] Building CXX object 
main/native/libhdfspp/lib/common/CMakeFiles/common_obj.dir/options.cc.o
     [exec] [ 48%] Building CXX object 
main/native/libhdfspp/lib/common/CMakeFiles/common_obj.dir/configuration.cc.o
     [exec] 
/Users/tiborkiss/workspace/apache-hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/common/configuration.cc:85:12:
 error: no viable conversion from 'optional<long>' to 'optional<long long>'
     [exec]     return result;
     [exec]            ^~~~~~
     [exec] 
/Users/tiborkiss/workspace/apache-hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/third_party/tr2/optional.hpp:427:13:
 note: candidate constructor not viable: no known conversion from 
'std::experimental::optional<long>' to 'std::experimental::nullopt_t' for 1st 
argument
     [exec]   constexpr optional(nullopt_t) noexcept : OptionalBase<T>() {};
     [exec]             ^
     [exec] 
/Users/tiborkiss/workspace/apache-hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/third_party/tr2/optional.hpp:429:3:
 note: candidate constructor not viable: no known conversion from 
'std::experimental::optional<long>' to 'const std::experimental::optional<long 
long> &' for 1st argument
     [exec]   optional(const optional& rhs)
     [exec]   ^
     [exec] 
/Users/tiborkiss/workspace/apache-hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/third_party/tr2/optional.hpp:438:3:
 note: candidate constructor not viable: no known conversion from 
'std::experimental::optional<long>' to 'std::experimental::optional<long long> 
&&' for 1st argument
     [exec]   optional(optional&& rhs) 
noexcept(is_nothrow_move_constructible<T>::value)
     [exec]   ^
     [exec] 
/Users/tiborkiss/workspace/apache-hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/third_party/tr2/optional.hpp:447:13:
 note: candidate constructor not viable: no known conversion from 
'std::experimental::optional<long>' to 'const long long &' for 1st argument
     [exec]   constexpr optional(const T& v) : OptionalBase<T>(v) {}
     [exec]             ^
     [exec] 
/Users/tiborkiss/workspace/apache-hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/third_party/tr2/optional.hpp:449:13:
 note: candidate constructor not viable: no known conversion from 
'std::experimental::optional<long>' to 'long long &&' for 1st argument
     [exec]   constexpr optional(T&& v) : OptionalBase<T>(constexpr_move(v)) {}
     [exec]             ^
     [exec] 1 error generated.
     [exec] make[2]: *** 
[main/native/libhdfspp/lib/common/CMakeFiles/common_obj.dir/configuration.cc.o] 
Error 1
     [exec] make[1]: *** 
[main/native/libhdfspp/lib/common/CMakeFiles/common_obj.dir/all] Error 2
     [exec] make: *** [all] Error 2
{noformat}
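The likely root cause is that `strtol` returns `long`, while the function's declared return type is `optional<int64_t>` (`long long` on OS X), and the bundled tr2 `optional` has no converting constructor between optionals of distinct value types. A hypothetical reduction of the failing pattern and the obvious fix (construct the target optional type directly, so only the underlying integer is widened; `std::optional` stands in here for the tr2 implementation, and `ParseInt` is an illustrative name, not the actual function in configuration.cc):

```cpp
#include <cstdint>
#include <cstdlib>
#include <optional>  // stand-in for the bundled third_party/tr2/optional.hpp

// Parse a decimal string into optional<int64_t>. strtol yields 'long';
// on OS X int64_t is 'long long', so returning an optional<long> would
// require an optional<long> -> optional<long long> conversion that the
// tr2 optional does not provide.
std::optional<int64_t> ParseInt(const char *str) {
  char *end = nullptr;
  long raw = strtol(str, &end, 10);
  if (end == str) {
    return std::nullopt;  // no digits consumed
  }
  // Convert the value, not the wrapper: long -> int64_t is a plain
  // integral conversion, so constructing optional<int64_t> directly works.
  return std::optional<int64_t>(raw);
}
```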

Compile error 2:
{noformat}
     [exec] 
/Users/tiborkiss/workspace/apache-hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/fs/filesystem.cc:285:66:
 error: use of overloaded operator '<<' is ambiguous (with operand types 
'hdfs::LogMessage' and 'size_type' (aka 'unsigned long'))
     [exec]                                   << " Existing thread count = " << 
worker_threads_.size());
     [exec]                                   
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~
{noformat}
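On OS X, `std::vector::size_type` is `unsigned long`, which is an exact match for none of `LogMessage`'s streaming overloads, so overload resolution is ambiguous. The usual remedy is an explicit cast at the call site. A hypothetical reduction (this `LogMessage` stand-in only mimics the shape of the hdfs one):

```cpp
#include <cstdint>
#include <sstream>
#include <string>
#include <vector>

// Stand-in for hdfs::LogMessage with overloads only for fixed-width
// integer types, mirroring the ambiguity reported in filesystem.cc.
struct LogMessage {
  std::ostringstream out;
  LogMessage &operator<<(const char *s) { out << s; return *this; }
  LogMessage &operator<<(int32_t v)     { out << v; return *this; }
  LogMessage &operator<<(uint64_t v)    { out << v; return *this; }
};

std::string FormatThreadCount(const std::vector<int> &worker_threads) {
  LogMessage msg;
  // size() returns size_type ('unsigned long' on OS X, matching neither
  // overload exactly); casting to a fixed-width type makes exactly one
  // overload viable on every platform.
  msg << " Existing thread count = "
      << static_cast<uint64_t>(worker_threads.size());
  return msg.out.str();
}
```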

There is an additional compile failure in the native client related to thread_local.
The complexity of that error warrants tracking it in a [separate 
ticket|https://issues.apache.org/jira/browse/HDFS-10355].

{noformat}
     [exec] 
/Users/tiborkiss/workspace/apache-hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/bindings/c/hdfs.cc:66:1:
 error: thread-local storage is not supported for the current target
     [exec] thread_local std::string errstr;
     [exec] ^
     [exec] 1 warning and 1 error generated.
     [exec] make[2]: *** 
[main/native/libhdfspp/lib/bindings/c/CMakeFiles/bindings_c_obj.dir/hdfs.cc.o] 
Error 1
     [exec] make[1]: *** 
[main/native/libhdfspp/lib/bindings/c/CMakeFiles/bindings_c_obj.dir/all] Error 2
     [exec] make: *** [all] Error 2
{noformat}
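The `thread_local` failure presumably stems from Apple's clang of that era not supporting C++11 `thread_local` (at least for types with non-trivial constructors such as `std::string`). One common portability workaround, sketched here as an assumption and not as the fix adopted in HDFS-10355, is pthread-based TLS with lazy per-thread allocation:

```cpp
#include <pthread.h>
#include <string>

namespace {
// Sketch of a pthread-backed replacement for 'thread_local std::string
// errstr': each thread lazily allocates its own string, which is freed
// by the destructor callback when the thread exits.
pthread_key_t errstr_key;
pthread_once_t errstr_once = PTHREAD_ONCE_INIT;

void DeleteErrstr(void *p) { delete static_cast<std::string *>(p); }
void MakeErrstrKey() { pthread_key_create(&errstr_key, &DeleteErrstr); }
}  // namespace

// Returns this thread's private error-string instance.
std::string &GetErrstr() {
  pthread_once(&errstr_once, &MakeErrstrKey);
  auto *s = static_cast<std::string *>(pthread_getspecific(errstr_key));
  if (s == nullptr) {
    s = new std::string();
    pthread_setspecific(errstr_key, s);
  }
  return *s;
}
```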


Unit test failure:
{noformat}
     [exec]  2/16 Test  #2: test_test_libhdfs_zerocopy_hdfs_static 
..........***Failed    2.07 sec
     [exec] log4j:WARN No appenders could be found for logger 
(org.apache.hadoop.metrics2.lib.MutableMetricsFactory).
     [exec] log4j:WARN Please initialize the log4j system properly.
     [exec] log4j:WARN See 
http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
     [exec] Formatting using clusterid: testClusterID
     [exec] nmdCreate: Builder#build error:
     [exec] java.lang.RuntimeException: Although a UNIX domain socket path is 
configured as 
/var/folders/g1/r_lb7cr10d9951ff9yrtvq9w0000gp/T//native_mini_dfs.sock.87990.282475249,
 we cannot start a localDataXceiverServer because libhadoop cannot be loaded.
     [exec]     at 
org.apache.hadoop.hdfs.server.datanode.DataNode.getDomainPeerServer(DataNode.java:1017)
     [exec]     at 
org.apache.hadoop.hdfs.server.datanode.DataNode.initDataXceiver(DataNode.java:988)
     [exec]     at 
org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1196)
     [exec]     at 
org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:467)
     [exec]     at 
org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2486)
     [exec]     at 
org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2374)
     [exec]     at 
org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1592)
     [exec]     at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:844)
     [exec]     at 
org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:482)
     [exec]     at 
org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:441)
     [exec] TEST_ERROR: failed on 
/Users/tiborkiss/workspace/apache-hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs-tests/test_libhdfs_zerocopy.c:254
 (errno: 316): got NULL from cl
{noformat}


> Fix compilation & unit test issues on Mac OS X with clang compiler
> ------------------------------------------------------------------
>
>                 Key: HDFS-10354
>                 URL: https://issues.apache.org/jira/browse/HDFS-10354
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: hdfs-client
>         Environment: OS X: 10.11
> clang: Apple LLVM version 7.0.2 (clang-700.1.81)
>            Reporter: Tibor Kiss
>            Assignee: Tibor Kiss
>         Attachments: HDFS-10354.HDFS-8707.001.patch, 
> HDFS-10354.HDFS-8707.002.patch
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
