[jira] [Created] (HDFS-12087) The error message is not friendly when set a path with the policy not enabled

2017-07-04 Thread lufei (JIRA)
lufei created HDFS-12087:


 Summary: The error message is not friendly when set a path with 
the policy not enabled
 Key: HDFS-12087
 URL: https://issues.apache.org/jira/browse/HDFS-12087
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs
Affects Versions: 3.0.0-alpha3
Reporter: lufei
Assignee: lufei


First, the user adds a policy with the -addPolicies command but does not enable it; then the user sets a path with this policy. The error message is displayed as below:
{color:#707070}RemoteException: Policy 'XOR-2-1-128k' does not match any 
enabled erasure coding policies: []. The set of enabled erasure coding policies 
can be configured at 'dfs.namenode.ec.policies.enabled'.{color}

The policy 'XOR-2-1-128k' was added by the user but has not been enabled. The error message does not prompt the user to enable the policy first. I think the error message would be better as below:
{color:#707070}RemoteException: Policy 'XOR-2-1-128k' does not match any 
enabled erasure coding policies: []. The set of enabled erasure coding policies 
can be configured at 'dfs.namenode.ec.policies.enabled', or the policy can be 
enabled first with the '-enablePolicy' EC command.{color}
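
To illustrate, a minimal sketch of how such a message could be built (the helper name and its placement are assumptions for illustration, not the actual NameNode code):

{code}
import java.io.IOException;
import java.util.Arrays;
import org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy;

// Hypothetical helper; the real check lives somewhere in the NameNode's EC path.
private static void checkPolicyEnabled(String policyName,
    ErasureCodingPolicy[] enabledPolicies) throws IOException {
  for (ErasureCodingPolicy p : enabledPolicies) {
    if (p.getName().equals(policyName)) {
      return;
    }
  }
  throw new IOException("Policy '" + policyName
      + "' does not match any enabled erasure coding policies: "
      + Arrays.toString(enabledPolicies)
      + ". The set of enabled erasure coding policies can be configured at "
      + "'dfs.namenode.ec.policies.enabled', or the policy can be enabled "
      + "first with the '-enablePolicy' EC command.");
}
{code}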






[jira] [Created] (HDFS-12086) Ozone: Add the unit test for KSMMetrics

2017-07-04 Thread Yiqun Lin (JIRA)
Yiqun Lin created HDFS-12086:


 Summary: Ozone: Add the unit test for KSMMetrics
 Key: HDFS-12086
 URL: https://issues.apache.org/jira/browse/HDFS-12086
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Affects Versions: HDFS-7240
Reporter: Yiqun Lin
Assignee: Yiqun Lin


Currently the unit test for KSMMetrics is missing, and some metric names are 
inconsistent with those in the documentation (a minimal test sketch follows the list):

* numVolumeModifies should be numVolumeUpdates
* numBucketModifies should be numBucketUpdates
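
For illustration, a minimal test sketch using Hadoop's MetricsAsserts helper (the KSMMetrics factory and increment method names are assumptions based on the naming above, not a confirmed API):

{code}
import static org.apache.hadoop.test.MetricsAsserts.assertCounter;
import static org.apache.hadoop.test.MetricsAsserts.getMetrics;

import org.apache.hadoop.metrics2.MetricsRecordBuilder;
import org.junit.Test;

public class TestKSMMetrics {
  @Test
  public void testVolumeUpdateCounter() {
    // Assumed names: KSMMetrics.create() and incNumVolumeUpdates() are
    // illustrative stand-ins for the actual KSM metrics API.
    KSMMetrics metrics = KSMMetrics.create();
    metrics.incNumVolumeUpdates();
    MetricsRecordBuilder rb = getMetrics("KSMMetrics");
    // The counter name should match the documentation, i.e. NumVolumeUpdates.
    assertCounter("NumVolumeUpdates", 1L, rb);
  }
}
{code}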







Apache Hadoop qbt Report: trunk+JDK8 on Linux/ppc64le

2017-07-04 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/365/

[Jul 4, 2017 5:02:14 AM] (aajisaka) HDFS-12079. Description of 
dfs.block.invalidate.limit is incorrect in
[Jul 4, 2017 5:51:52 AM] (aajisaka) HDFS-12078. Add time unit to the 
description of property
[Jul 4, 2017 9:48:02 AM] (stevel) HADOOP-14615. Add 
ServiceOperations.stopQuietly that accept slf4j logger
[Jul 4, 2017 9:55:20 AM] (aajisaka) HADOOP-14571. Deprecate public APIs relate 
to log4j1
[Jul 4, 2017 10:41:07 AM] (stevel) HADOOP-14617. Add 
ReflectionUtils.logThreadInfo that accept slf4j logger




-1 overall


The following subsystems voted -1:
compile mvninstall unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc javac


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.ipc.TestRPC 
   hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer 
   hadoop.hdfs.TestLocalDFS 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.hdfs.server.datanode.TestDataNodeUUID 
   hadoop.yarn.server.nodemanager.recovery.TestNMLeveldbStateStoreService 
   hadoop.yarn.server.nodemanager.containermanager.TestContainerManager 
   hadoop.yarn.server.nodemanager.TestNodeManagerShutdown 
   hadoop.yarn.server.timeline.TestRollingLevelDB 
   hadoop.yarn.server.timeline.TestTimelineDataManager 
   hadoop.yarn.server.timeline.TestLeveldbTimelineStore 
   hadoop.yarn.server.timeline.recovery.TestLeveldbTimelineStateStore 
   hadoop.yarn.server.timeline.TestRollingLevelDBTimelineStore 
   hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer 
   hadoop.yarn.server.resourcemanager.recovery.TestLeveldbRMStateStore 
   hadoop.yarn.server.resourcemanager.TestRMRestart 
   hadoop.yarn.server.TestDiskFailures 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.yarn.server.timeline.TestLevelDBCacheTimelineStore 
   hadoop.yarn.server.timeline.TestOverrideTimelineStoreYarnClient 
   hadoop.yarn.server.timeline.TestEntityGroupFSTimelineStore 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.mapred.TestShuffleHandler 
   hadoop.mapreduce.v2.hs.TestHistoryServerLeveldbStateStoreService 
   hadoop.yarn.sls.nodemanager.TestNMSimulator 

Timed out junit tests :

   org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache 
   org.apache.hadoop.yarn.server.resourcemanager.TestRMStoreCommands 
   org.apache.hadoop.yarn.server.resourcemanager.recovery.TestZKRMStateStore 
   org.apache.hadoop.yarn.server.resourcemanager.TestReservationSystemWithRMHA 
   org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA 
   org.apache.hadoop.yarn.server.resourcemanager.TestKillApplicationWithRMHA 
  

   mvninstall:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/365/artifact/out/patch-mvninstall-root.txt  [616K]

   compile:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/365/artifact/out/patch-compile-root.txt  [20K]

   cc:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/365/artifact/out/patch-compile-root.txt  [20K]

   javac:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/365/artifact/out/patch-compile-root.txt  [20K]

   unit:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/365/artifact/out/patch-unit-hadoop-assemblies.txt  [4.0K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/365/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt  [152K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/365/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt  [608K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/365/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt  [56K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/365/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt  [64K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/365/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt  [76K]

Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2017-07-04 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/454/

[Jul 4, 2017 5:02:14 AM] (aajisaka) HDFS-12079. Description of 
dfs.block.invalidate.limit is incorrect in
[Jul 4, 2017 5:51:52 AM] (aajisaka) HDFS-12078. Add time unit to the 
description of property




-1 overall


The following subsystems voted -1:
findbugs unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   module:hadoop-hdfs-project/hadoop-hdfs-client 
   Possible exposure of partially initialized object in 
org.apache.hadoop.hdfs.DFSClient.initThreadsNumForStripedReads(int) At 
DFSClient.java:object in 
org.apache.hadoop.hdfs.DFSClient.initThreadsNumForStripedReads(int) At 
DFSClient.java:[line 2888] 
   org.apache.hadoop.hdfs.server.protocol.SlowDiskReports.equals(Object) 
makes inefficient use of keySet iterator instead of entrySet iterator At 
SlowDiskReports.java:keySet iterator instead of entrySet iterator At 
SlowDiskReports.java:[line 105] 

FindBugs :

   module:hadoop-hdfs-project/hadoop-hdfs 
   Possible null pointer dereference in 
org.apache.hadoop.hdfs.qjournal.server.JournalNode.getJournalsStatus() due to 
return value of called method Dereferenced at 
JournalNode.java:org.apache.hadoop.hdfs.qjournal.server.JournalNode.getJournalsStatus()
 due to return value of called method Dereferenced at JournalNode.java:[line 
302] 
   
org.apache.hadoop.hdfs.server.common.HdfsServerConstants$StartupOption.setClusterId(String)
 unconditionally sets the field clusterId At HdfsServerConstants.java:clusterId 
At HdfsServerConstants.java:[line 193] 
   
org.apache.hadoop.hdfs.server.common.HdfsServerConstants$StartupOption.setForce(int)
 unconditionally sets the field force At HdfsServerConstants.java:force At 
HdfsServerConstants.java:[line 217] 
   
org.apache.hadoop.hdfs.server.common.HdfsServerConstants$StartupOption.setForceFormat(boolean)
 unconditionally sets the field isForceFormat At 
HdfsServerConstants.java:isForceFormat At HdfsServerConstants.java:[line 229] 
   
org.apache.hadoop.hdfs.server.common.HdfsServerConstants$StartupOption.setInteractiveFormat(boolean)
 unconditionally sets the field isInteractiveFormat At 
HdfsServerConstants.java:isInteractiveFormat At HdfsServerConstants.java:[line 
237] 
   Possible null pointer dereference in 
org.apache.hadoop.hdfs.server.datanode.DataStorage.linkBlocksHelper(File, File, 
int, HardLink, boolean, File, List) due to return value of called method 
Dereferenced at 
DataStorage.java:org.apache.hadoop.hdfs.server.datanode.DataStorage.linkBlocksHelper(File,
 File, int, HardLink, boolean, File, List) due to return value of called method 
Dereferenced at DataStorage.java:[line 1339] 
   Possible null pointer dereference in 
org.apache.hadoop.hdfs.server.namenode.NNStorageRetentionManager.purgeOldLegacyOIVImages(String,
 long) due to return value of called method Dereferenced at 
NNStorageRetentionManager.java:org.apache.hadoop.hdfs.server.namenode.NNStorageRetentionManager.purgeOldLegacyOIVImages(String,
 long) due to return value of called method Dereferenced at 
NNStorageRetentionManager.java:[line 258] 
   Possible null pointer dereference in 
org.apache.hadoop.hdfs.server.namenode.NNUpgradeUtil$1.visitFile(Path, 
BasicFileAttributes) due to return value of called method Dereferenced at 
NNUpgradeUtil.java:org.apache.hadoop.hdfs.server.namenode.NNUpgradeUtil$1.visitFile(Path,
 BasicFileAttributes) due to return value of called method Dereferenced at 
NNUpgradeUtil.java:[line 133] 
   Useless condition:argv.length >= 1 at this point At DFSAdmin.java:[line 
2085] 
   Useless condition:numBlocks == -1 at this point At 
ImageLoaderCurrent.java:[line 727] 

FindBugs :

   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager 
   Useless object stored in variable removedNullContainers of method 
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeOrTrackCompletedContainersFromContext(List)
 At NodeStatusUpdaterImpl.java:removedNullContainers of method 
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeOrTrackCompletedContainersFromContext(List)
 At NodeStatusUpdaterImpl.java:[line 642] 
   
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeVeryOldStoppedContainersFromCache()
 makes inefficient use of keySet iterator instead of entrySet iterator At 
NodeStatusUpdaterImpl.java:keySet iterator instead of entrySet iterator At 
NodeStatusUpdaterImpl.java:[line 719] 
   Hard coded reference to an absolute pathname in 

[jira] [Created] (HDFS-12085) Reconfigure namenode interval fails if the interval was set with time unit

2017-07-04 Thread Weiwei Yang (JIRA)
Weiwei Yang created HDFS-12085:

 Summary: Reconfigure namenode interval fails if the interval was 
set with time unit
 Key: HDFS-12085
 URL: https://issues.apache.org/jira/browse/HDFS-12085
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs, tools
Reporter: Weiwei Yang
Assignee: Weiwei Yang
Priority: Critical


It fails when I set the duration with a time unit, e.g. 5s, with the following error:

{noformat}
Reconfiguring status for node [localhost:8111]: started at Tue Jul 04 08:14:18 
PDT 2017 and finished at Tue Jul 04 08:14:18 PDT 2017.
FAILED: Change property dfs.heartbeat.interval
From: "3s"
To: "5s"
Error: For input string: "5s".
{noformat}

Time unit support was added via HDFS-9847.
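
The likely cause is that the reconfiguration path parses the raw string as a plain number instead of going through Configuration.getTimeDuration, which understands suffixes like "5s". A hedged sketch of the unit-aware parsing (the helper shown is illustrative, not the actual reconfiguration hook):

{code}
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSConfigKeys;

// Illustrative only: parse values like "3s" or "5s" into seconds instead of
// calling Long.parseLong on the raw property value.
static long parseHeartbeatIntervalSeconds(String newVal) {
  Configuration conf = new Configuration();
  if (newVal != null) {
    conf.set(DFSConfigKeys.DFS_HEARTBEAT_INTERVAL_KEY, newVal);
  }
  return conf.getTimeDuration(DFSConfigKeys.DFS_HEARTBEAT_INTERVAL_KEY,
      DFSConfigKeys.DFS_HEARTBEAT_INTERVAL_DEFAULT, TimeUnit.SECONDS);
}
{code}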






[jira] [Created] (HDFS-12084) Scheduled will not decrement when file is deleted before all IBR's received

2017-07-04 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HDFS-12084:

 Summary: Scheduled will not decrement when file is deleted before 
all IBR's received
 Key: HDFS-12084
 URL: https://issues.apache.org/jira/browse/HDFS-12084
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula


When small files are created and deleted very frequently, and the DNs have not 
reported the blocks to the NN before the deletion, the scheduled size keeps 
incrementing and is not decremented when the blocks are deleted.

Note: this counter is rolled every 20 minutes, but within those 20 minutes the 
size can grow large given so many operations. This is observed more often when 
batch IBR is enabled with committed-allowed=1.






[jira] [Created] (HDFS-12083) Ozone: KSM: previous key has to be excluded from result in listVolumes, listBuckets and listKeys

2017-07-04 Thread Nandakumar (JIRA)
Nandakumar created HDFS-12083:

 Summary: Ozone: KSM: previous key has to be excluded from result 
in listVolumes, listBuckets and listKeys
 Key: HDFS-12083
 URL: https://issues.apache.org/jira/browse/HDFS-12083
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Reporter: Nandakumar
Assignee: Nandakumar


When the previous key is set as part of the list calls [listVolumes, listBuckets 
& listKeys], the result includes the previous key itself; there is no need to 
have it in the result.
Since the previous key is present in the result, we will never receive an empty 
list in subsequent list calls. This makes it difficult to define an exit 
criterion when we want to fetch all the values using multiple list calls (with 
previous-key set), as shown in the sketch below.
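
To illustrate the exit criterion this enables, a hypothetical client-side pagination loop (KsmClient, VolumeInfo, and the listVolumes signature are assumptions for illustration, not the actual KSM API):

{code}
import java.util.ArrayList;
import java.util.List;

// Hypothetical pagination over volumes using a previous-key cursor.
static List<VolumeInfo> listAllVolumes(KsmClient ksm, int maxKeys) {
  List<VolumeInfo> all = new ArrayList<>();
  String prevKey = null;
  while (true) {
    // Assumed to return up to maxKeys volumes strictly *after* prevKey.
    List<VolumeInfo> batch = ksm.listVolumes(prevKey, maxKeys);
    if (batch.isEmpty()) {
      break; // only reachable once prevKey itself is excluded from the result
    }
    all.addAll(batch);
    prevKey = batch.get(batch.size() - 1).getVolumeName();
  }
  return all;
}
{code}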






[jira] [Created] (HDFS-12082) BlockInvalidateLimit value is incorrectly set after namenode heartbeat interval reconfigured

2017-07-04 Thread Weiwei Yang (JIRA)
Weiwei Yang created HDFS-12082:

 Summary: BlockInvalidateLimit value is incorrectly set after 
namenode heartbeat interval reconfigured 
 Key: HDFS-12082
 URL: https://issues.apache.org/jira/browse/HDFS-12082
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs, namenode
Reporter: Weiwei Yang
Assignee: Weiwei Yang


HDFS-1477 provides an option to reconfigure the namenode heartbeat interval 
without restarting the namenode. When the heartbeat interval is reconfigured, 
{{blockInvalidateLimit}} gets recomputed:

{code}
 this.blockInvalidateLimit = Math.max(20 * (int) (intervalSeconds),
DFSConfigKeys.DFS_BLOCK_INVALIDATE_LIMIT_DEFAULT);
{code}

This doesn't honor the existing value set by {{dfs.block.invalidate.limit}}.
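
A possible fix, sketched under the assumption that the configured value is re-read at reconfiguration time (this is not the committed patch):

{code}
// Honor dfs.block.invalidate.limit if the user set it, instead of always
// falling back to the hardcoded default in the max() computation.
final int configuredLimit = conf.getInt(
    DFSConfigKeys.DFS_BLOCK_INVALIDATE_LIMIT_KEY,
    DFSConfigKeys.DFS_BLOCK_INVALIDATE_LIMIT_DEFAULT);
this.blockInvalidateLimit = Math.max(
    20 * (int) intervalSeconds, configuredLimit);
{code}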


