[jira] [Created] (HDFS-5560) Trash configuration log statements print incorrect units

2013-11-24 Thread Josh Elser (JIRA)
Josh Elser created HDFS-5560:


 Summary: Trash configuration log statements print incorrect units
 Key: HDFS-5560
 URL: https://issues.apache.org/jira/browse/HDFS-5560
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.2.0
Reporter: Josh Elser


I ran `hdfs dfs -expunge` on a 2.2.0 system and noticed the following message 
printed on the console:

{noformat}
$ hdfs dfs -expunge
13/11/23 22:12:17 INFO fs.TrashPolicyDefault: Namenode trash configuration: 
Deletion interval = 180 minutes, Emptier interval = 0 minutes.
{noformat}

The configuration values for both the deletion interval and the emptier interval 
are given in minutes, converted to milliseconds, and then logged as milliseconds 
but with a label of minutes. It looks like this was introduced in HDFS-4903.
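
For illustration, the likely shape of the bug (a hedged sketch; the variable 
names and exact statements are assumptions, not the actual TrashPolicyDefault 
source):

{code}
// The interval is configured in minutes but stored in milliseconds.
long deletionInterval =
    (long) (conf.getFloat("fs.trash.interval", 0) * 60 * 1000);

// Buggy: logs the millisecond value under a "minutes" label.
LOG.info("Deletion interval = " + deletionInterval + " minutes");

// Fix: convert back to minutes for display.
LOG.info("Deletion interval = " + (deletionInterval / (60 * 1000)) + " minutes");
{code}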



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5560) Trash configuration log statements print incorrect units

2013-11-24 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HDFS-5560:
-

Target Version/s: 3.0.0, 2.2.1
  Status: Patch Available  (was: Open)

The patch currently applies against branch-2.2 and trunk.

 Trash configuration log statements print incorrect units
 -

 Key: HDFS-5560
 URL: https://issues.apache.org/jira/browse/HDFS-5560
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.2.0
Reporter: Josh Elser

 I ran `hdfs dfs -expunge` on a 2.2.0 system and noticed the following message 
 printed on the console:
 {noformat}
 $ hdfs dfs -expunge
 13/11/23 22:12:17 INFO fs.TrashPolicyDefault: Namenode trash configuration: 
 Deletion interval = 180 minutes, Emptier interval = 0 minutes.
 {noformat}
 The configuration values for both the deletion interval and the emptier 
 interval are given in minutes, converted to milliseconds, and then logged as 
 milliseconds but with a label of minutes. It looks like this was introduced in 
 HDFS-4903.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5560) Trash configuration log statements print incorrect units

2013-11-24 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HDFS-5560:
-

Attachment: HDFS-5560.patch

 Trash configuration log statements print incorrect units
 -

 Key: HDFS-5560
 URL: https://issues.apache.org/jira/browse/HDFS-5560
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.2.0
Reporter: Josh Elser
 Attachments: HDFS-5560.patch


 I ran `hdfs dfs -expunge` on a 2.2.0 system and noticed the following message 
 printed on the console:
 {noformat}
 $ hdfs dfs -expunge
 13/11/23 22:12:17 INFO fs.TrashPolicyDefault: Namenode trash configuration: 
 Deletion interval = 180 minutes, Emptier interval = 0 minutes.
 {noformat}
 The configuration values for both the deletion interval and the emptier 
 interval are given in minutes, converted to milliseconds, and then logged as 
 milliseconds but with a label of minutes. It looks like this was introduced in 
 HDFS-4903.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5560) Trash configuration log statements print incorrect units

2013-11-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13831009#comment-13831009
 ] 

Hadoop QA commented on HDFS-5560:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12615512/HDFS-5560.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5557//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5557//console

This message is automatically generated.

 Trash configuration log statements print incorrect units
 -

 Key: HDFS-5560
 URL: https://issues.apache.org/jira/browse/HDFS-5560
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.2.0
Reporter: Josh Elser
 Attachments: HDFS-5560.patch


 I ran `hdfs dfs -expunge` on a 2.2.0 system and noticed the following message 
 printed on the console:
 {noformat}
 $ hdfs dfs -expunge
 13/11/23 22:12:17 INFO fs.TrashPolicyDefault: Namenode trash configuration: 
 Deletion interval = 180 minutes, Emptier interval = 0 minutes.
 {noformat}
 The configuration values for both the deletion interval and the emptier 
 interval are given in minutes, converted to milliseconds, and then logged as 
 milliseconds but with a label of minutes. It looks like this was introduced in 
 HDFS-4903.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-4600) HDFS file append failing in multinode cluster

2013-11-24 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13831029#comment-13831029
 ] 

Uma Maheswara Rao G commented on HDFS-4600:
---

Hi Roman, do you still think something needs to be addressed here, or can we 
close this bug?

 HDFS file append failing in multinode cluster
 -

 Key: HDFS-4600
 URL: https://issues.apache.org/jira/browse/HDFS-4600
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.0.3-alpha
Reporter: Roman Shaposhnik
Priority: Minor
 Attachments: X.java, core-site.xml, hdfs-site.xml


 NOTE: the following only happens in a fully distributed setup (core-site.xml 
 and hdfs-site.xml are attached)
 Steps to reproduce:
 {noformat}
 $ javac -cp /usr/lib/hadoop/client/\* X.java
 $ echo a > a.txt
 $ hadoop fs -ls /tmp/a.txt
 ls: `/tmp/a.txt': No such file or directory
 $ HADOOP_CLASSPATH=`pwd` hadoop X /tmp/a.txt
 13/03/13 16:05:14 WARN hdfs.DFSClient: DataStreamer Exception
 java.io.IOException: Failed to replace a bad datanode on the existing 
 pipeline due to no more good datanodes being available to try. (Nodes: 
 current=[10.10.37.16:50010, 10.80.134.126:50010], 
 original=[10.10.37.16:50010, 10.80.134.126:50010]). The current failed 
 datanode replacement policy is DEFAULT, and a client may configure this via 
 'dfs.client.block.write.replace-datanode-on-failure.policy' in its 
 configuration.
   at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:793)
   at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:858)
   at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:964)
   at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:470)
 Exception in thread "main" java.io.IOException: Failed to replace a bad 
 datanode on the existing pipeline due to no more good datanodes being 
 available to try. (Nodes: current=[10.10.37.16:50010, 10.80.134.126:50010], 
 original=[10.10.37.16:50010, 10.80.134.126:50010]). The current failed 
 datanode replacement policy is DEFAULT, and a client may configure this via 
 'dfs.client.block.write.replace-datanode-on-failure.policy' in its 
 configuration.
   at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:793)
   at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:858)
   at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:964)
   at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:470)
 13/03/13 16:05:14 ERROR hdfs.DFSClient: Failed to close file /tmp/a.txt
 java.io.IOException: Failed to replace a bad datanode on the existing 
 pipeline due to no more good datanodes being available to try. (Nodes: 
 current=[10.10.37.16:50010, 10.80.134.126:50010], 
 original=[10.10.37.16:50010, 10.80.134.126:50010]). The current failed 
 datanode replacement policy is DEFAULT, and a client may configure this via 
 'dfs.client.block.write.replace-datanode-on-failure.policy' in its 
 configuration.
   at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:793)
   at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:858)
   at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:964)
   at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:470)
 {noformat}
 Given that the file actually does get created:
 {noformat}
 $ hadoop fs -ls /tmp/a.txt
 Found 1 items
 -rw-r--r--   3 root hadoop  6 2013-03-13 16:05 /tmp/a.txt
 {noformat}
 this feels like a regression in APPEND's functionality.
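
 As a rough illustration of the failing call and of the client-side override 
 named in the exception message, here is a hedged sketch (this is not the 
 attached X.java, and the NEVER policy is only reasonable on small test 
 clusters):
 {code}
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FSDataOutputStream;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;

 public class AppendSketch {
   public static void main(String[] args) throws Exception {
     Configuration conf = new Configuration();
     // With NEVER, the client keeps the existing pipeline instead of
     // replacing a failed datanode (small test clusters only).
     conf.set("dfs.client.block.write.replace-datanode-on-failure.policy",
         "NEVER");
     FileSystem fs = FileSystem.get(conf);
     Path p = new Path("/tmp/a.txt");
     if (!fs.exists(p)) {
       fs.create(p).close();
     }
     FSDataOutputStream out = fs.append(p); // the call that hits pipeline recovery
     out.write("a\n".getBytes("UTF-8"));
     out.close();
   }
 }
 {code}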



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-2832) Enable support for heterogeneous storages in HDFS

2013-11-24 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-2832:


Attachment: h2832_20131124.patch

 Enable support for heterogeneous storages in HDFS
 -

 Key: HDFS-2832
 URL: https://issues.apache.org/jira/browse/HDFS-2832
 Project: Hadoop HDFS
  Issue Type: New Feature
Affects Versions: 0.24.0
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Attachments: 20130813-HeterogeneousStorage.pdf, H2832_20131107.patch, 
 editsStored, h2832_20131023.patch, h2832_20131023b.patch, 
 h2832_20131025.patch, h2832_20131028.patch, h2832_20131028b.patch, 
 h2832_20131029.patch, h2832_20131103.patch, h2832_20131104.patch, 
 h2832_20131105.patch, h2832_20131107b.patch, h2832_20131108.patch, 
 h2832_20131110.patch, h2832_20131110b.patch, h2832_2013.patch, 
 h2832_20131112.patch, h2832_20131112b.patch, h2832_20131114.patch, 
 h2832_20131118.patch, h2832_20131119.patch, h2832_20131119b.patch, 
 h2832_20131121.patch, h2832_20131122.patch, h2832_20131122b.patch, 
 h2832_20131123.patch, h2832_20131124.patch


 HDFS currently supports a configuration where storages are a list of 
 directories. Typically each of these directories corresponds to a volume with 
 its own file system. All these directories are homogeneous and therefore 
 identified as a single storage at the namenode. I propose changing the current 
 model, where a Datanode *is a* storage, to one where a Datanode *is a 
 collection of* storages.
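
 Conceptually, the proposed model might be sketched as follows (hypothetical 
 names, purely illustrative; not actual HDFS code):
 {code}
 import java.util.HashMap;
 import java.util.Map;

 class StorageInfo { /* capacity, state, (later) storage type, ... */ }

 class DatanodeInfo {
   // Proposed: the namenode tracks a collection of storages per datanode,
   // keyed by storage ID, instead of treating the datanode as one storage.
   private final Map<String, StorageInfo> storages =
       new HashMap<String, StorageInfo>();

   void addStorage(String storageID, StorageInfo storage) {
     storages.put(storageID, storage);
   }
 }
 {code}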



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5561) New Web UI cannot display correctly

2013-11-24 Thread Fengdong Yu (JIRA)
Fengdong Yu created HDFS-5561:
-

 Summary: New Web UI cannot display correctly
 Key: HDFS-5561
 URL: https://issues.apache.org/jira/browse/HDFS-5561
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 2.2.0, 3.0.0
Reporter: Fengdong Yu
Assignee: Haohui Mai
Priority: Minor


The new web UI does not display correctly; I attached a screenshot.

I've tried Chrome 31.0.1650 and Firefox 25.0.1.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5561) New Web UI cannot display correctly

2013-11-24 Thread Fengdong Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fengdong Yu updated HDFS-5561:
--

Attachment: NNUI.PNG

 New Web UI cannot display correctly
 ---

 Key: HDFS-5561
 URL: https://issues.apache.org/jira/browse/HDFS-5561
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 3.0.0, 2.2.0
Reporter: Fengdong Yu
Assignee: Haohui Mai
Priority: Minor
 Attachments: NNUI.PNG


 The new web UI does not display correctly; I attached a screenshot.
 I've tried Chrome 31.0.1650 and Firefox 25.0.1.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-2832) Enable support for heterogeneous storages in HDFS

2013-11-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13831144#comment-13831144
 ] 

Hadoop QA commented on HDFS-2832:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12615521/h2832_20131124.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 48 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  
org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5558//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5558//console

This message is automatically generated.

 Enable support for heterogeneous storages in HDFS
 -

 Key: HDFS-2832
 URL: https://issues.apache.org/jira/browse/HDFS-2832
 Project: Hadoop HDFS
  Issue Type: New Feature
Affects Versions: 0.24.0
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Attachments: 20130813-HeterogeneousStorage.pdf, H2832_20131107.patch, 
 editsStored, h2832_20131023.patch, h2832_20131023b.patch, 
 h2832_20131025.patch, h2832_20131028.patch, h2832_20131028b.patch, 
 h2832_20131029.patch, h2832_20131103.patch, h2832_20131104.patch, 
 h2832_20131105.patch, h2832_20131107b.patch, h2832_20131108.patch, 
 h2832_20131110.patch, h2832_20131110b.patch, h2832_2013.patch, 
 h2832_20131112.patch, h2832_20131112b.patch, h2832_20131114.patch, 
 h2832_20131118.patch, h2832_20131119.patch, h2832_20131119b.patch, 
 h2832_20131121.patch, h2832_20131122.patch, h2832_20131122b.patch, 
 h2832_20131123.patch, h2832_20131124.patch


 HDFS currently supports a configuration where storages are a list of 
 directories. Typically each of these directories corresponds to a volume with 
 its own file system. All these directories are homogeneous and therefore 
 identified as a single storage at the namenode. I propose changing the current 
 model, where a Datanode *is a* storage, to one where a Datanode *is a 
 collection of* storages.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5560) Trash configuration log statements print incorrect units

2013-11-24 Thread Vinay (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13831151#comment-13831151
 ] 

Vinay commented on HDFS-5560:
-

Thanks, Josh, for posting the patch.
+1, the patch looks good.

 Trash configuration log statements print incorrect units
 -

 Key: HDFS-5560
 URL: https://issues.apache.org/jira/browse/HDFS-5560
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.2.0
Reporter: Josh Elser
 Attachments: HDFS-5560.patch


 I ran `hdfs dfs -expunge` on a 2.2.0 system and noticed the following message 
 printed on the console:
 {noformat}
 $ hdfs dfs -expunge
 13/11/23 22:12:17 INFO fs.TrashPolicyDefault: Namenode trash configuration: 
 Deletion interval = 180 minutes, Emptier interval = 0 minutes.
 {noformat}
 The configuration values for both the deletion interval and the emptier 
 interval are given in minutes, converted to milliseconds, and then logged as 
 milliseconds but with a label of minutes. It looks like this was introduced in 
 HDFS-4903.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5562) TestCacheDirectives fails on trunk

2013-11-24 Thread Akira AJISAKA (JIRA)
Akira AJISAKA created HDFS-5562:
---

 Summary: TestCacheDirectives fails on trunk
 Key: HDFS-5562
 URL: https://issues.apache.org/jira/browse/HDFS-5562
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Akira AJISAKA


Some tests fail on trunk.
{code}
Tests in error:
  TestCacheDirectives.testWaitForCachedReplicas:710 » Runtime Cannot start 
datan...
  TestCacheDirectives.testAddingCacheDirectiveInfosWhenCachingIsDisabled:767 » 
Runtime
  TestCacheDirectives.testWaitForCachedReplicasInDirectory:813 » Runtime Cannot 
...
  TestCacheDirectives.testReplicationFactor:897 » Runtime Cannot start datanode 
...

Tests run: 9, Failures: 0, Errors: 4, Skipped: 0
{code}

For more details, see https://builds.apache.org/job/Hadoop-Hdfs-trunk/1592/



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5562) TestCacheDirectives fails on trunk

2013-11-24 Thread Binglin Chang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13831233#comment-13831233
 ] 

Binglin Chang commented on HDFS-5562:
-

It looks like the native library is not loaded properly in the build 
environment, so the tests that need the native library fail.

{code}
Stacktrace

java.lang.RuntimeException: Cannot start datanode because the configured max 
locked memory size (dfs.datanode.max.locked.memory) is greater than zero and 
native code is not available.
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:668)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:267)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1764)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1679)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1191)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:666)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:335)
at 
org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:317)
at 
org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache.setUp(TestFsDatasetCache.java:113)
Standard Output

2013-11-24 11:56:46,893 WARN  util.NativeCodeLoader 
(NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for 
your platform... using builtin-java classes where applicable
{code}

Those tests should be skipped if the native library is not available.

 TestCacheDirectives fails on trunk
 --

 Key: HDFS-5562
 URL: https://issues.apache.org/jira/browse/HDFS-5562
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Akira AJISAKA

 Some tests fail on trunk.
 {code}
 Tests in error:
   TestCacheDirectives.testWaitForCachedReplicas:710 » Runtime Cannot start 
 datan...
   TestCacheDirectives.testAddingCacheDirectiveInfosWhenCachingIsDisabled:767 
 » Runtime
   TestCacheDirectives.testWaitForCachedReplicasInDirectory:813 » Runtime 
 Cannot ...
   TestCacheDirectives.testReplicationFactor:897 » Runtime Cannot start 
 datanode ...
 Tests run: 9, Failures: 0, Errors: 4, Skipped: 0
 {code}
 For more details, see https://builds.apache.org/job/Hadoop-Hdfs-trunk/1592/



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5562) TestCacheDirectives fails on trunk

2013-11-24 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13831242#comment-13831242
 ] 

Akira AJISAKA commented on HDFS-5562:
-

Thanks for your comment!

bq. java.lang.RuntimeException: Cannot start datanode because the configured 
max locked memory size (dfs.datanode.max.locked.memory) is greater than zero 
and native code is not available.

The exception occurred because the max locked memory size is set to 16384 at 
line 698 of TestCacheDirectives.java.

{code}
conf.setLong(DFS_DATANODE_MAX_LOCKED_MEMORY_KEY, CACHE_CAPACITY);
{code}

IMO, the tests should be skipped if native code is not available.
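
One way to do that (a hedged sketch assuming JUnit 4's Assume and Hadoop's 
NativeCodeLoader; not an actual patch):

{code}
import static org.junit.Assume.assumeTrue;

import org.apache.hadoop.util.NativeCodeLoader;
import org.junit.Before;

public class TestCacheDirectives {
  @Before
  public void setUp() throws Exception {
    // Skip (rather than fail) when native code is unavailable, since
    // dfs.datanode.max.locked.memory > 0 requires the native library.
    assumeTrue(NativeCodeLoader.isNativeCodeLoaded());
    // ... existing cluster setup ...
  }
}
{code}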

 TestCacheDirectives fails on trunk
 --

 Key: HDFS-5562
 URL: https://issues.apache.org/jira/browse/HDFS-5562
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Akira AJISAKA

 Some tests fail on trunk.
 {code}
 Tests in error:
   TestCacheDirectives.testWaitForCachedReplicas:710 » Runtime Cannot start 
 datan...
   TestCacheDirectives.testAddingCacheDirectiveInfosWhenCachingIsDisabled:767 
 » Runtime
   TestCacheDirectives.testWaitForCachedReplicasInDirectory:813 » Runtime 
 Cannot ...
   TestCacheDirectives.testReplicationFactor:897 » Runtime Cannot start 
 datanode ...
 Tests run: 9, Failures: 0, Errors: 4, Skipped: 0
 {code}
 For more details, see https://builds.apache.org/job/Hadoop-Hdfs-trunk/1592/



--
This message was sent by Atlassian JIRA
(v6.1#6144)