[ https://issues.apache.org/jira/browse/HDFS-5562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13831242#comment-13831242 ]

Akira AJISAKA commented on HDFS-5562:
-------------------------------------

Thanks for your comment!

bq. java.lang.RuntimeException: Cannot start datanode because the configured 
max locked memory size (dfs.datanode.max.locked.memory) is greater than zero 
and native code is not available.

The exception occurs because the max locked memory size is set to CACHE_CAPACITY (16384) at line 698 of TestCacheDirectives.java:

{code}
    conf.setLong(DFS_DATANODE_MAX_LOCKED_MEMORY_KEY, CACHE_CAPACITY);
{code}

IMO, these tests should be skipped if native code is not available.
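As a rough sketch of that skip (plain Java, no Hadoop dependencies, so the names here are stand-ins): in the real test, JUnit's Assume.assumeTrue(...) in a setup method, guarded by Hadoop's native-code check, would mark the tests as skipped rather than letting DataNode startup throw.

```java
// Sketch only: isNativeAvailable() is a hypothetical stand-in for the
// Hadoop-side native-code check; the actual test would call the real
// check inside Assume.assumeTrue(...) so JUnit reports "skipped".
public class NativeSkipSketch {
    // Stand-in check; set -Dnative.available=true to simulate native code.
    static boolean isNativeAvailable() {
        return Boolean.getBoolean("native.available");
    }

    public static void main(String[] args) {
        if (!isNativeAvailable()) {
            // Assume.assumeTrue(false) would short-circuit here in JUnit.
            System.out.println("SKIPPED: native code not available");
            return;
        }
        System.out.println("running cache directive tests");
    }
}
```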

> TestCacheDirectives fails on trunk
> ----------------------------------
>
>                 Key: HDFS-5562
>                 URL: https://issues.apache.org/jira/browse/HDFS-5562
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: test
>    Affects Versions: 3.0.0
>            Reporter: Akira AJISAKA
>
> Some tests fail on trunk.
> {code}
> Tests in error:
>   TestCacheDirectives.testWaitForCachedReplicas:710 » Runtime Cannot start 
> datan...
>   TestCacheDirectives.testAddingCacheDirectiveInfosWhenCachingIsDisabled:767 
> » Runtime
>   TestCacheDirectives.testWaitForCachedReplicasInDirectory:813 » Runtime 
> Cannot ...
>   TestCacheDirectives.testReplicationFactor:897 » Runtime Cannot start 
> datanode ...
> Tests run: 9, Failures: 0, Errors: 4, Skipped: 0
> {code}
> For more details, see https://builds.apache.org/job/Hadoop-Hdfs-trunk/1592/



--
This message was sent by Atlassian JIRA
(v6.1#6144)