[ https://issues.apache.org/jira/browse/HDFS-13560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16476749#comment-16476749 ]

Anbang Hu commented on HDFS-13560:
----------------------------------

In startDataNode in DataNode.java:
{code:java}
if (dnConf.maxLockedMemory > 0) {
  if (!NativeIO.POSIX.getCacheManipulator().verifyCanMlock()) {
    throw new RuntimeException(String.format(
        "Cannot start datanode because the configured max locked memory" +
        " size (%s) is greater than zero and native code is not available.",
        DFS_DATANODE_MAX_LOCKED_MEMORY_KEY));
  }
  if (Path.WINDOWS) {
    NativeIO.Windows.extendWorkingSetSize(dnConf.maxLockedMemory);
  } else {
    long ulimit = NativeIO.POSIX.getCacheManipulator().getMemlockLimit();
    if (dnConf.maxLockedMemory > ulimit) {
      throw new RuntimeException(String.format(
        "Cannot start datanode because the configured max locked memory" +
        " size (%s) of %d bytes is more than the datanode's available" +
        " RLIMIT_MEMLOCK ulimit of %d bytes.",
        DFS_DATANODE_MAX_LOCKED_MEMORY_KEY,
        dnConf.maxLockedMemory,
        ulimit));
    }
  }
}
{code}
[~cnauroth] do you think we should add a similar check here before calling
extendWorkingSetSize for Windows? Not sure what the Windows equivalent of the
ulimit to check against would be, though.
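
To make the idea concrete, here is a rough sketch of the kind of guard I have
in mind (not the attached patch). NativeIO.Windows currently exposes no way to
query a working-set limit, so getWorkingSetLimit() below is purely hypothetical
and would need a native implementation, e.g. on top of
GetProcessWorkingSetSize:
{code:java}
// Hypothetical sketch only: mirror the POSIX RLIMIT_MEMLOCK guard on Windows.
// NativeIO.Windows.getWorkingSetLimit() does not exist today; it is assumed
// here as a native helper (e.g. wrapping GetProcessWorkingSetSize).
if (Path.WINDOWS) {
  long workingSetLimit = NativeIO.Windows.getWorkingSetLimit(); // hypothetical
  if (workingSetLimit > 0 && dnConf.maxLockedMemory > workingSetLimit) {
    throw new RuntimeException(String.format(
        "Cannot start datanode because the configured max locked memory" +
        " size (%s) of %d bytes is more than the available working set" +
        " limit of %d bytes.",
        DFS_DATANODE_MAX_LOCKED_MEMORY_KEY,
        dnConf.maxLockedMemory,
        workingSetLimit));
  }
  NativeIO.Windows.extendWorkingSetSize(dnConf.maxLockedMemory);
}
{code}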

> Insufficient system resources exist to complete the requested service for 
> some tests on Windows
> -----------------------------------------------------------------------------------------------
>
>                 Key: HDFS-13560
>                 URL: https://issues.apache.org/jira/browse/HDFS-13560
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Anbang Hu
>            Assignee: Anbang Hu
>            Priority: Major
>              Labels: Windows
>         Attachments: HDFS-13560.000.patch
>
>
> On Windows, there are 30 tests in the HDFS component that fail with an error
> like the following:
> [ERROR] Tests run: 7, Failures: 0, Errors: 7, Skipped: 0, Time elapsed: 50.149 s <<< FAILURE! - in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles
> [ERROR] testDisableLazyPersistFileScrubber(org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles)  Time elapsed: 16.513 s <<< ERROR!
> 1450: Insufficient system resources exist to complete the requested service.
> at org.apache.hadoop.io.nativeio.NativeIO$Windows.extendWorkingSetSize(Native Method)
> at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1339)
> at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:495)
> at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2695)
> at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2598)
> at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1554)
> at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:904)
> at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:514)
> at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:473)
> at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase.startUpCluster(LazyPersistTestCase.java:316)
> at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase$ClusterWithRamDiskBuilder.build(LazyPersistTestCase.java:415)
> at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles.testDisableLazyPersistFileScrubber(TestLazyPersistFiles.java:128)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
> at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
> at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> The involved tests are:
> {code:java}
> TestLazyPersistFiles
> TestLazyPersistPolicy
> TestLazyPersistReplicaRecovery
> TestLazyPersistLockedMemory#testWritePipelineFailure
> TestLazyPersistLockedMemory#testShortBlockFinalized
> TestLazyPersistReplicaPlacement#testRamDiskNotChosenByDefault
> TestLazyPersistReplicaPlacement#testFallbackToDisk
> TestLazyPersistReplicaPlacement#testPlacementOnSizeLimitedRamDisk
> TestLazyPersistReplicaPlacement#testPlacementOnRamDisk
> TestLazyWriter#testDfsUsageCreateDelete
> TestLazyWriter#testDeleteAfterPersist
> TestLazyWriter#testDeleteBeforePersist
> TestLazyWriter#testLazyPersistBlocksAreSaved
> TestDirectoryScanner#testDeleteBlockOnTransientStorage
> TestDirectoryScanner#testRetainBlockOnPersistentStorage
> TestDirectoryScanner#testExceptionHandlingWhileDirectoryScan
> TestDirectoryScanner#testDirectoryScanner
> TestDirectoryScanner#testThrottling
> TestDirectoryScanner#testDirectoryScannerInFederatedCluster
> TestNameNodeMXBean#testNameNodeMXBeanInfo
> {code}
> [ERROR] Errors:
> [ERROR] TestDirectoryScanner.testDeleteBlockOnTransientStorage:385 » NativeIO Insuffic...
> [ERROR] TestDirectoryScanner.testDirectoryScanner:426->runTest:431 » NativeIO Insuffic...
> [ERROR] TestDirectoryScanner.testDirectoryScannerInFederatedCluster:1026 » NativeIO In...
> [ERROR] TestDirectoryScanner.testExceptionHandlingWhileDirectoryScan:982 » NativeIO In...
> [ERROR] TestDirectoryScanner.testRetainBlockOnPersistentStorage:344 » NativeIO Insuffi...
> [ERROR] TestDirectoryScanner.testThrottling:583 » NativeIO Insufficient system resourc...
> [ERROR] TestLazyPersistFiles.testAppendIsDenied:51->LazyPersistTestCase.startUpCluster:316 » NativeIO
> [ERROR] TestLazyPersistFiles.testConcurrentRead:186->LazyPersistTestCase.startUpCluster:316 » NativeIO
> [ERROR] TestLazyPersistFiles.testConcurrentWrites:237->LazyPersistTestCase.startUpCluster:316 » NativeIO
> [ERROR] TestLazyPersistFiles.testCorruptFilesAreDiscarded:94->LazyPersistTestCase.startUpCluster:316 » NativeIO
> [ERROR] TestLazyPersistFiles.testDisableLazyPersistFileScrubber:128->LazyPersistTestCase.startUpCluster:316 » NativeIO
> [ERROR] TestLazyPersistFiles.testFileShouldNotDiscardedIfNNRestarted:157->LazyPersistTestCase.startUpCluster:316 » NativeIO
> [ERROR] TestLazyPersistFiles.testTruncateIsDenied:72->LazyPersistTestCase.startUpCluster:316 » NativeIO
> [ERROR] TestLazyPersistLockedMemory.testShortBlockFinalized:134->LazyPersistTestCase.startUpCluster:316 » NativeIO
> [ERROR] TestLazyPersistLockedMemory.testWritePipelineFailure:154->LazyPersistTestCase.startUpCluster:316 » NativeIO
> [ERROR] TestLazyPersistPolicy.testPolicyNotSetByDefault:37->LazyPersistTestCase.startUpCluster:316 » NativeIO
> [ERROR] TestLazyPersistPolicy.testPolicyPersistenceInEditLog:62->LazyPersistTestCase.startUpCluster:316 » NativeIO
> [ERROR] TestLazyPersistPolicy.testPolicyPersistenceInFsImage:76->LazyPersistTestCase.startUpCluster:316 » NativeIO
> [ERROR] TestLazyPersistPolicy.testPolicyPropagation:50->LazyPersistTestCase.startUpCluster:316 » NativeIO
> [ERROR] TestLazyPersistReplicaPlacement.testFallbackToDisk:72->LazyPersistTestCase.startUpCluster:316 » NativeIO
> [ERROR] TestLazyPersistReplicaPlacement.testPlacementOnRamDisk:41->LazyPersistTestCase.startUpCluster:316 » NativeIO
> [ERROR] TestLazyPersistReplicaPlacement.testPlacementOnSizeLimitedRamDisk:52->LazyPersistTestCase.startUpCluster:316 » NativeIO
> [ERROR] TestLazyPersistReplicaPlacement.testRamDiskNotChosenByDefault:163->LazyPersistTestCase.startUpCluster:316 » NativeIO
> [ERROR] TestLazyPersistReplicaRecovery.testDnRestartWithSavedReplicas:36->LazyPersistTestCase.startUpCluster:316 » NativeIO
> [ERROR] TestLazyPersistReplicaRecovery.testDnRestartWithUnsavedReplicas:61->LazyPersistTestCase.startUpCluster:316 » NativeIO
> [ERROR] TestLazyWriter.testDeleteAfterPersist:211->LazyPersistTestCase.startUpCluster:316 » NativeIO
> [ERROR] TestLazyWriter.testDeleteBeforePersist:184->LazyPersistTestCase.startUpCluster:316 » NativeIO
> [ERROR] TestLazyWriter.testDfsUsageCreateDelete:236->LazyPersistTestCase.startUpCluster:316 » NativeIO
> [ERROR] TestLazyWriter.testLazyPersistBlocksAreSaved:43->LazyPersistTestCase.startUpCluster:316 » NativeIO
> [ERROR] TestNameNodeMXBean.testNameNodeMXBeanInfo:99 » NativeIO Insufficient system re...
> [INFO]
> [ERROR] Tests run: 30, Failures: 0, Errors: 30, Skipped: 0


