[jira] [Created] (HDFS-12068) Modify judgment condition in function computeVolumeDataDensity, otherwise it may divide by zero

2017-06-28 Thread steven-wugang (JIRA)
steven-wugang created HDFS-12068:


 Summary: Modify judgment condition in function 
computeVolumeDataDensity, otherwise it may divide by zero
 Key: HDFS-12068
 URL: https://issues.apache.org/jira/browse/HDFS-12068
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: steven-wugang


In the function computeVolumeDataDensity there is a piece of code, as follows:
{code}
public void computeVolumeDataDensity() {
  ...
  if (volume.computeEffectiveCapacity() < 0) {
    skipMisConfiguredVolume(volume);
    continue;
  }

  double dfsUsedRatio =
      truncateDecimals(volume.getUsed() /
          (double) volume.computeEffectiveCapacity());
  ...
}
{code}
The check does not filter out the case where volume.computeEffectiveCapacity() 
is zero, so the division below it may divide by zero.
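A minimal, self-contained sketch of the proposed guard (the names below are simplified stand-ins for the DiskBalancer internals, not the actual HDFS code):

```java
// Illustrative stand-in for the volume density computation: the fix is to
// change the guard from "< 0" to "<= 0" so that a zero effective capacity
// is also skipped instead of reaching the division.
final class VolumeDensitySketch {

    // Returns used/capacity, or -1 to signal "skip this volume" when the
    // effective capacity is zero or negative.
    static double dfsUsedRatio(long used, long effectiveCapacity) {
        if (effectiveCapacity <= 0) {   // was "< 0": zero slipped through
            return -1;
        }
        return used / (double) effectiveCapacity;
    }
}
```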



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-12067) Add command help description for the 'hdfs dfsadmin -help getVolumeReport' command.

2017-06-28 Thread steven-wugang (JIRA)
steven-wugang created HDFS-12067:


 Summary: Add command help description for the 'hdfs dfsadmin -help 
getVolumeReport' command.
 Key: HDFS-12067
 URL: https://issues.apache.org/jira/browse/HDFS-12067
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: steven-wugang


When I use this command, the help description is unclear, especially for the 
argument 'port': it is easy to mistake it for the port (default 9866) in 
'dfs.datanode.address'. Therefore, to make this command easier to use, I have 
added some descriptions of the arguments.






[jira] [Created] (HDFS-12066) When Namenode is in safemode, removing a user's erasure coding policy should not be allowed

2017-06-28 Thread lufei (JIRA)
lufei created HDFS-12066:


 Summary: When Namenode is in safemode, removing a user's erasure 
coding policy should not be allowed
 Key: HDFS-12066
 URL: https://issues.apache.org/jira/browse/HDFS-12066
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs
Affects Versions: 3.0.0-alpha3
Reporter: lufei
Assignee: lufei


FSNamesystem#removeErasureCodingPolicy should call checkNameNodeSafeMode() to 
ensure the NameNode is not in safemode.
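The guard pattern would look roughly like the following (a simplified, self-contained sketch with stand-in names; not the actual FSNamesystem code):

```java
// Simplified sketch of the safemode guard pattern used by other
// FSNamesystem write operations; all names here are illustrative.
final class SafeModeSketch {
    static final class SafeModeException extends RuntimeException {
        SafeModeException(String msg) { super(msg); }
    }

    private final boolean inSafeMode;

    SafeModeSketch(boolean inSafeMode) { this.inSafeMode = inSafeMode; }

    // Mirrors the intent of checkNameNodeSafeMode(): reject write
    // operations while the NameNode is still in safemode.
    void checkNameNodeSafeMode(String operation) {
        if (inSafeMode) {
            throw new SafeModeException(
                "Cannot " + operation + ". Name node is in safe mode.");
        }
    }

    void removeErasureCodingPolicy(String policyName) {
        checkNameNodeSafeMode("remove erasure coding policy " + policyName);
        // ... actual removal would happen here ...
    }
}
```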






[jira] [Created] (HDFS-12065) Fix log format in StripedBlockReconstructor

2017-06-28 Thread Lei (Eddy) Xu (JIRA)
Lei (Eddy) Xu created HDFS-12065:


 Summary: Fix log format in StripedBlockReconstructor
 Key: HDFS-12065
 URL: https://issues.apache.org/jira/browse/HDFS-12065
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: erasure-coding
Affects Versions: 3.0.0-alpha3
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
Priority: Trivial


The {{LOG}} call in {{StripedBlockReconstructor}} uses the wrong signature, 
which results in the following message without the stack trace:

{code}
Failed to reconstruct striped block: 
BP-1026491657-172.31.114.203-1498498077419:blk_-9223372036854759232_5065
java.lang.NullPointerException
{code}






[jira] [Created] (HDFS-12064) Reuse object mapper in HDFS

2017-06-28 Thread Mingliang Liu (JIRA)
Mingliang Liu created HDFS-12064:


 Summary: Reuse object mapper in HDFS
 Key: HDFS-12064
 URL: https://issues.apache.org/jira/browse/HDFS-12064
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Mingliang Liu
Assignee: Hanisha Koneru
Priority: Minor


Currently there are a few places that do not follow the recommended pattern of 
reusing the object mapper where possible. In some of these places we can 
replace the object mapper with {{ObjectReader}} or {{ObjectWriter}}: they are 
straightforward and thread-safe.

The benefit is purely about performance, so I assume we don't have to worry 
much about unit-test code.
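As an illustration of the pattern (assuming Jackson 2 on the classpath; the class and call sites below are generic examples, not specific HDFS code):

```java
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.ObjectReader;
import com.fasterxml.jackson.databind.ObjectWriter;
import java.util.Map;

// Reuse one statically-created mapper/reader/writer instead of building a
// new ObjectMapper on every call; ObjectReader and ObjectWriter are
// immutable and therefore fully thread-safe.
final class JsonUtil {
    private static final ObjectMapper MAPPER = new ObjectMapper();
    private static final ObjectReader MAP_READER =
        MAPPER.readerFor(Map.class);
    private static final ObjectWriter WRITER = MAPPER.writer();

    static Map<String, Object> parse(String json) throws Exception {
        return MAP_READER.readValue(json);  // no per-call mapper allocation
    }

    static String write(Object value) throws Exception {
        return WRITER.writeValueAsString(value);
    }
}
```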






[jira] [Created] (HDFS-12063) Ozone: Ozone shell: Multiple RPC calls for put/get key

2017-06-28 Thread Nandakumar (JIRA)
Nandakumar created HDFS-12063:
-

 Summary: Ozone: Ozone shell: Multiple RPC calls for put/get key
 Key: HDFS-12063
 URL: https://issues.apache.org/jira/browse/HDFS-12063
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Nandakumar


With the current implementation, multiple RPC calls are made for each put/get 
key ozone shell call:

{code:title=org.apache.hadoop.ozone.web.ozShell.keys.PutKeyHandler#execute}
OzoneVolume vol = client.getVolume(volumeName);
OzoneBucket bucket = vol.getBucket(bucketName);
bucket.putKey(keyName, dataFile);
{code}

{code:title=org.apache.hadoop.ozone.web.ozShell.keys.GetKeyHandler#execute}
OzoneVolume vol = client.getVolume(volumeName);
OzoneBucket bucket = vol.getBucket(bucketName);
bucket.getKey(keyName, dataFilePath);
{code}

This can be optimized.
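A toy model of the cost being described (purely illustrative; none of these names are the actual Ozone client API):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Toy model of the put-key path where each lookup costs one RPC. The
// current shell code makes three round trips (getVolume, getBucket,
// putKey); a hypothetical combined call could do it in one.
final class RpcCountSketch {
    final AtomicInteger rpcCalls = new AtomicInteger();

    String getVolume(String name) {
        rpcCalls.incrementAndGet();
        return name;
    }

    String getBucket(String vol, String name) {
        rpcCalls.incrementAndGet();
        return name;
    }

    void putKey(String bucket, String key) {
        rpcCalls.incrementAndGet();
    }

    // Current shell behaviour: three round trips per put.
    void putKeyCurrent(String vol, String bucket, String key) {
        String v = getVolume(vol);
        String b = getBucket(v, bucket);
        putKey(b, key);
    }

    // Hypothetical optimized path: one RPC carrying the full key path.
    void putKeyCombined(String vol, String bucket, String key) {
        rpcCalls.incrementAndGet();
    }
}
```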






Apache Hadoop qbt Report: trunk+JDK8 on Linux/ppc64le

2017-06-28 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/359/

[Jun 27, 2017 2:09:03 PM] (stevel) HADOOP-14536. Update azure-storage sdk to 
version 5.3.0 Contributed by
[Jun 27, 2017 8:19:14 PM] (liuml07) HADOOP-14594. 
ITestS3AFileOperationCost::testFakeDirectoryDeletion to
[Jun 27, 2017 10:12:42 PM] (jlowe) YARN-6738. LevelDBCacheTimelineStore should 
reuse ObjectMapper
[Jun 27, 2017 11:48:47 PM] (liuml07) HADOOP-14573. regression: Azure tests 
which capture logs failing with
[Jun 28, 2017 12:32:07 AM] (liuml07) HADOOP-14546. Azure: Concurrent I/O does 
not work when secure.mode is
[Jun 28, 2017 1:50:09 AM] (aajisaka) MAPREDUCE-6697. Concurrent task limits 
should only be applied when
[Jun 28, 2017 6:49:09 AM] (xiao) HADOOP-14515. Addendum. Specifically configure 
zookeeper-related log
[Jun 28, 2017 9:22:13 AM] (stevel) HADOOP-14190. Add more on S3 regions to the 
s3a documentation.




-1 overall


The following subsystems voted -1:
compile mvninstall unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc javac


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.ha.TestZKFailoverController 
   hadoop.hdfs.TestEncryptedTransfer 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure180 
   hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer 
   hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 
   hadoop.hdfs.TestRollingUpgrade 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.yarn.server.nodemanager.recovery.TestNMLeveldbStateStoreService 
   hadoop.yarn.server.nodemanager.TestNodeManagerShutdown 
   hadoop.yarn.server.timeline.TestRollingLevelDB 
   hadoop.yarn.server.timeline.TestTimelineDataManager 
   hadoop.yarn.server.timeline.TestLeveldbTimelineStore 
   hadoop.yarn.server.timeline.recovery.TestLeveldbTimelineStateStore 
   hadoop.yarn.server.timeline.TestRollingLevelDBTimelineStore 
   
hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer 
   hadoop.yarn.server.resourcemanager.recovery.TestLeveldbRMStateStore 
   hadoop.yarn.server.resourcemanager.TestRMRestart 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.yarn.server.timeline.TestLevelDBCacheTimelineStore 
   hadoop.yarn.server.timeline.TestOverrideTimelineStoreYarnClient 
   hadoop.yarn.server.timeline.TestEntityGroupFSTimelineStore 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.mapred.TestShuffleHandler 
   hadoop.mapreduce.v2.hs.TestHistoryServerLeveldbStateStoreService 
   hadoop.yarn.sls.TestSLSRunner 

Timed out junit tests :

   org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache 
   org.apache.hadoop.yarn.server.resourcemanager.TestRMStoreCommands 
   
org.apache.hadoop.yarn.server.resourcemanager.TestReservationSystemWithRMHA 
   
org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA 
   
org.apache.hadoop.yarn.server.resourcemanager.TestKillApplicationWithRMHA 
  

   mvninstall:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/359/artifact/out/patch-mvninstall-root.txt
  [504K]

   compile:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/359/artifact/out/patch-compile-root.txt
  [20K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/359/artifact/out/patch-compile-root.txt
  [20K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/359/artifact/out/patch-compile-root.txt
  [20K]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/359/artifact/out/patch-unit-hadoop-assemblies.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/359/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [144K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/359/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [364K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/359/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
  [56K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/359/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt
  [52K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/359/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [76K]
   

[jira] [Created] (HDFS-12062) removeErasureCodingPolicy needs super user permission

2017-06-28 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HDFS-12062:
--

 Summary: removeErasureCodingPolicy needs super user permission
 Key: HDFS-12062
 URL: https://issues.apache.org/jira/browse/HDFS-12062
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: erasure-coding
Reporter: Wei-Chiu Chuang
Assignee: Wei-Chiu Chuang
Priority: Critical


Currently {{NameNodeRPCServer#removeErasureCodingPolicy}} does not require 
super user permission. This is not appropriate as 
{{NameNodeRPCServer#addErasureCodingPolicies}} requires super user permission.






[jira] [Created] (HDFS-12061) Add TraceScope for multiple DFSClient EC operations

2017-06-28 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HDFS-12061:
--

 Summary: Add TraceScope for multiple DFSClient EC operations
 Key: HDFS-12061
 URL: https://issues.apache.org/jira/browse/HDFS-12061
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs
Affects Versions: 3.0.0-alpha4
Reporter: Wei-Chiu Chuang
Priority: Minor


A number of DFSClient EC operations, including addErasureCodingPolicies, 
removeErasureCodingPolicy, enableErasureCodingPolicy, and 
disableErasureCodingPolicy, do not have a TraceScope similar to this:
{code}
try (TraceScope ignored = tracer.newScope("getErasureCodingCodecs")) {
  ...
}
{code}






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2017-06-28 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/448/

[Jun 27, 2017 7:39:47 AM] (aengineer) HDFS-12045. Add log when Diskbalancer 
volume is transient storage type.
[Jun 27, 2017 11:49:26 AM] (aajisaka) HDFS-12040. 
TestFsDatasetImpl.testCleanShutdownOfVolume fails.
[Jun 27, 2017 2:09:03 PM] (stevel) HADOOP-14536. Update azure-storage sdk to 
version 5.3.0 Contributed by
[Jun 27, 2017 8:19:14 PM] (liuml07) HADOOP-14594. 
ITestS3AFileOperationCost::testFakeDirectoryDeletion to
[Jun 27, 2017 10:12:42 PM] (jlowe) YARN-6738. LevelDBCacheTimelineStore should 
reuse ObjectMapper
[Jun 27, 2017 11:48:47 PM] (liuml07) HADOOP-14573. regression: Azure tests 
which capture logs failing with
[Jun 28, 2017 12:32:07 AM] (liuml07) HADOOP-14546. Azure: Concurrent I/O does 
not work when secure.mode is
[Jun 28, 2017 1:50:09 AM] (aajisaka) MAPREDUCE-6697. Concurrent task limits 
should only be applied when




-1 overall


The following subsystems voted -1:
findbugs unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 
   Useless object stored in variable removedNullContainers of method 
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeOrTrackCompletedContainersFromContext(List)
 At NodeStatusUpdaterImpl.java:removedNullContainers of method 
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeOrTrackCompletedContainersFromContext(List)
 At NodeStatusUpdaterImpl.java:[line 642] 
   
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeVeryOldStoppedContainersFromCache()
 makes inefficient use of keySet iterator instead of entrySet iterator At 
NodeStatusUpdaterImpl.java:keySet iterator instead of entrySet iterator At 
NodeStatusUpdaterImpl.java:[line 719] 
   Hard coded reference to an absolute pathname in 
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DockerLinuxContainerRuntime.launchContainer(ContainerRuntimeContext)
 At DockerLinuxContainerRuntime.java:absolute pathname in 
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DockerLinuxContainerRuntime.launchContainer(ContainerRuntimeContext)
 At DockerLinuxContainerRuntime.java:[line 455] 
   
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.createStatus()
 makes inefficient use of keySet iterator instead of entrySet iterator At 
ContainerLocalizer.java:keySet iterator instead of entrySet iterator At 
ContainerLocalizer.java:[line 334] 
   
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainerMetrics.usageMetrics
 is a mutable collection which should be package protected At 
ContainerMetrics.java:which should be package protected At 
ContainerMetrics.java:[line 134] 

Failed junit tests :

   hadoop.security.TestRaceWhenRelogin 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting 
   
hadoop.hdfs.server.blockmanagement.TestReconstructStripedBlocksWithRackAwareness
 
   hadoop.hdfs.server.namenode.ha.TestPipelinesFailover 
   hadoop.hdfs.TestFileCorruption 
   hadoop.yarn.server.resourcemanager.TestRMRestart 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
   hadoop.yarn.server.TestDiskFailures 

Timed out junit tests :

   org.apache.hadoop.yarn.server.resourcemanager.TestRMStoreCommands 
   
org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA 
   
org.apache.hadoop.yarn.server.resourcemanager.TestKillApplicationWithRMHA 
   
org.apache.hadoop.yarn.client.api.impl.TestOpportunisticContainerAllocation 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/448/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/448/artifact/out/diff-compile-javac-root.txt
  [192K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/448/artifact/out/diff-checkstyle-root.txt
  [17M]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/448/artifact/out/diff-patch-pylint.txt
  [20K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/448/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/448/artifact/out/diff-patch-shelldocs.txt
  [12K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/448/artifact/out/whitespace-eol.txt
  [12M]
   

[jira] [Created] (HDFS-12060) Ozone: OzoneClient: Add list calls

2017-06-28 Thread Nandakumar (JIRA)
Nandakumar created HDFS-12060:
-

 Summary: Ozone: OzoneClient: Add list calls
 Key: HDFS-12060
 URL: https://issues.apache.org/jira/browse/HDFS-12060
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Nandakumar
Assignee: Nandakumar


Support for {{listVolumes}}, {{listBuckets}}, {{listKeys}} in {{OzoneClient}}.





[jira] [Created] (HDFS-12059) Ozone: OzoneClient: Add delete calls

2017-06-28 Thread Nandakumar (JIRA)
Nandakumar created HDFS-12059:
-

 Summary: Ozone: OzoneClient: Add delete calls
 Key: HDFS-12059
 URL: https://issues.apache.org/jira/browse/HDFS-12059
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Nandakumar


Support for {{deleteVolume}}, {{deleteBucket}}, {{deleteKey}} in {{OzoneClient}}






[jira] [Created] (HDFS-12058) Ozone: OzoneClient: Add getInfo calls

2017-06-28 Thread Nandakumar (JIRA)
Nandakumar created HDFS-12058:
-

 Summary: Ozone: OzoneClient: Add getInfo calls
 Key: HDFS-12058
 URL: https://issues.apache.org/jira/browse/HDFS-12058
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Nandakumar
Assignee: Nandakumar


Support for {{getVolumeInfo}}, {{getBucketInfo}}, {{getKeyInfo}} in 
{{OzoneClient}}






[jira] [Created] (HDFS-12057) Ozone: OzoneClient: Implementation of OzoneClient

2017-06-28 Thread Nandakumar (JIRA)
Nandakumar created HDFS-12057:
-

 Summary: Ozone: OzoneClient: Implementation of OzoneClient
 Key: HDFS-12057
 URL: https://issues.apache.org/jira/browse/HDFS-12057
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Nandakumar


OzoneClient brings in support for client-facing Java APIs in Ozone to access 
the ObjectStore. It should support all the calls that {{OzoneHandler}} 
supports through REST.







[jira] [Created] (HDFS-12055) Remove all unused imports

2017-06-28 Thread chengbei (JIRA)
chengbei created HDFS-12055:
---

 Summary: Remove all unused imports
 Key: HDFS-12055
 URL: https://issues.apache.org/jira/browse/HDFS-12055
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: chengbei


Remove all unused imports; they are redundant.






[jira] [Created] (HDFS-12054) FSNamesystem#addECPolicies should call checkNameNodeSafeMode() to ensure Namenode is not in safemode

2017-06-28 Thread lufei (JIRA)
lufei created HDFS-12054:


 Summary: FSNamesystem#addECPolicies should call 
checkNameNodeSafeMode() to ensure Namenode is not in safemode
 Key: HDFS-12054
 URL: https://issues.apache.org/jira/browse/HDFS-12054
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs
Affects Versions: 3.0.0-alpha3
Reporter: lufei
Assignee: lufei


In the process of FSNamesystem#addECPolicies, it would be better to call 
checkNameNodeSafeMode() to ensure the NN is not in safemode.






[jira] [Created] (HDFS-12053) Ozone: ozone server should create missing metadata directory if it has permission to

2017-06-28 Thread Weiwei Yang (JIRA)
Weiwei Yang created HDFS-12053:
--

 Summary: Ozone: ozone server should create missing metadata 
directory if it has permission to
 Key: HDFS-12053
 URL: https://issues.apache.org/jira/browse/HDFS-12053
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Reporter: Weiwei Yang
Assignee: Weiwei Yang
Priority: Minor


The datanode state machine currently simply fails if the container metadata 
directory is missing; it would be better to create the directory if it has 
permission to. This is extremely useful for a fresh setup, where usually we 
set {{ozone.container.metadata.dirs}} under the same parent as 
{{dfs.datanode.data.dir}}, e.g.:

* /hadoop/hdfs/data
* /hadoop/hdfs/scm

If /hadoop/hdfs/scm/repository is not pre-created, ozone cannot be started.


