[jira] [Created] (HDFS-12052) Set "SWEBHDFS delegation" as DT kind if ssl is enabled in HttpFS

2017-06-27 Thread Zoran Dimitrijevic (JIRA)
Zoran Dimitrijevic created HDFS-12052:
-

 Summary: Set "SWEBHDFS delegation" as DT kind if ssl is enabled in 
HttpFS
 Key: HDFS-12052
 URL: https://issues.apache.org/jira/browse/HDFS-12052
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: httpfs, webhdfs
Affects Versions: 3.0.0-alpha3, 2.7.3
Reporter: Zoran Dimitrijevic


When HttpFS runs with httpfs.ssl.enabled, it should return SWEBHDFS delegation 
tokens. 

Currently, HttpFS returns the WEBHDFS token kind regardless of whether SSL is 
enabled. If clients connect directly to renew tokens (for example, hdfs dfs), 
everything works, because HttpFS doesn't check whether the token kind is 
swebhdfs or webhdfs. However, this breaks when the YARN RM needs to renew the 
token for a job (for example, when running hadoop distcp): since the DT kind is 
WEBHDFS, the RM tries to establish a non-SSL connection to HttpFS and fails.

I've tested a simple patch, which I'll upload to this JIRA; it fixes the 
issue (hadoop distcp works).
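
For illustration, a minimal sketch of the intended behavior (not the actual 
patch: the class and helper method below are hypothetical, and only the two 
token-kind strings and org.apache.hadoop.io.Text come from Hadoop):

{code}
import org.apache.hadoop.io.Text;

// Hypothetical helper, not the uploaded patch: picks the DT kind from the
// ssl setting so that renewers connect with the matching scheme.
public class HttpFSTokenKindSketch {
  private static final Text WEBHDFS_KIND = new Text("WEBHDFS delegation");
  private static final Text SWEBHDFS_KIND = new Text("SWEBHDFS delegation");

  static Text tokenKind(boolean sslEnabled) {
    // With httpfs.ssl.enabled set, a renewer such as the YARN RM must open
    // an https connection, so the token has to advertise the SWEBHDFS kind.
    return sslEnabled ? SWEBHDFS_KIND : WEBHDFS_KIND;
  }
}
{code}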






[jira] [Created] (HDFS-12051) Intern INodeFileAttributes$SnapshotCopy.name byte[] arrays to save memory

2017-06-27 Thread Misha Dmitriev (JIRA)
Misha Dmitriev created HDFS-12051:
-

 Summary: Intern INodeFileAttributes$SnapshotCopy.name byte[] 
arrays to save memory
 Key: HDFS-12051
 URL: https://issues.apache.org/jira/browse/HDFS-12051
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Misha Dmitriev
Assignee: Misha Dmitriev


When a snapshot diff operation is performed on a NameNode that manages several 
million HDFS files/directories, the NN needs a lot of memory. Analyzing one heap 
dump with jxray (www.jxray.com), we observed that duplicate byte[] arrays 
result in 6.5% memory overhead, and most of these arrays are referenced by 
{code}org.apache.hadoop.hdfs.server.namenode.INodeFileAttributes$SnapshotCopy.name{code}
 and {code}org.apache.hadoop.hdfs.server.namenode.INodeFile.name{code}:

{code}
19. DUPLICATE PRIMITIVE ARRAYS

Types of duplicate objects:
 Ovhd Num objs  Num unique objs   Class name

3,220,272K (6.5%)   104749528  25760871 byte[]


  1,841,485K (3.7%), 53194037 dup arrays (13158094 unique)
3510556 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 2228255 
of byte[8](48, 48, 48, 48, 48, 48, 95, 48), 357439 of byte[17](112, 97, 114, 
116, 45, 109, 45, 48, 48, 48, ...), 237395 of byte[8](48, 48, 48, 48, 48, 49, 
95, 48), 227853 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 
179193 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 169487 of 
byte[8](48, 48, 48, 48, 48, 50, 95, 48), 145055 of byte[17](112, 97, 114, 116, 
45, 109, 45, 48, 48, 48, ...), 128134 of byte[8](48, 48, 48, 48, 48, 51, 95, 
48), 108265 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...)
... and 45902395 more arrays, of which 13158084 are unique
 <-- 
org.apache.hadoop.hdfs.server.namenode.INodeFileAttributes$SnapshotCopy.name 
<-- org.apache.hadoop.hdfs.server.namenode.snapshot.FileDiff.snapshotINode <--  
{j.u.ArrayList} <-- 
org.apache.hadoop.hdfs.server.namenode.snapshot.FileDiffList.diffs <-- 
org.apache.hadoop.hdfs.server.namenode.snapshot.FileWithSnapshotFeature.diffs 
<-- org.apache.hadoop.hdfs.server.namenode.INode$Feature[] <-- 
org.apache.hadoop.hdfs.server.namenode.INodeFile.features <-- 
org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo.bc <-- ... (1 elements) 
... <-- org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap$1.entries <-- 
org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.blocks <-- 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.blocksMap <-- 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.this$0
 <-- j.l.Thread[] <-- j.l.ThreadGroup.threads <-- j.l.Thread.group <-- Java 
Static: org.apache.hadoop.fs.FileSystem$Statistics.STATS_DATA_CLEANER

  409,830K (0.8%), 13482787 dup arrays (13260241 unique)
430 of byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 353 of 
byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 352 of byte[32](116, 
97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 350 of byte[32](116, 97, 115, 107, 
95, 49, 52, 57, 55, 48, ...), 342 of byte[32](116, 97, 115, 107, 95, 49, 52, 
57, 55, 48, ...), 341 of byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, 
...), 341 of byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 340 of 
byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 337 of byte[32](116, 
97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 334 of byte[32](116, 97, 115, 107, 
95, 49, 52, 57, 55, 48, ...)
... and 13479257 more arrays, of which 13260231 are unique
 <-- org.apache.hadoop.hdfs.server.namenode.INodeFile.name <-- 
org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo.bc <-- 
org.apache.hadoop.util.LightWeightGSet$LinkedElement[] <-- 
org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap$1.entries <-- 
org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.blocks <-- 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.blocksMap <-- 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.this$0
 <-- j.l.Thread[] <-- 
org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap$1.entries <-- 
org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.blocks <-- 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.blocksMap <-- 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.this$0
 <-- j.l.Thread[] <-- j.l.ThreadGroup.threads <-- j.l.Thread.group <-- Java 
Static: org.apache.hadoop.fs.FileSystem$Statistics.STATS_DATA_CLEANER

{code}

To eliminate this duplication and reclaim memory, we will need to write a small 
class similar to StringInterner, but designed specifically for byte[] arrays.
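
As a rough sketch only (names are made up, and a production version would 
likely need weak references so the pool itself doesn't pin memory), such an 
interner could look like:

{code}
import java.util.Arrays;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of a byte[] interner in the spirit of StringInterner. Arrays use
// identity equals/hashCode, so contents are wrapped in a key object that
// compares by value; equal arrays then map to one canonical instance.
public final class ByteArrayInterner {
  private static final ConcurrentHashMap<Key, byte[]> POOL =
      new ConcurrentHashMap<>();

  private static final class Key {
    private final byte[] bytes;
    Key(byte[] bytes) { this.bytes = bytes; }
    @Override public boolean equals(Object o) {
      return o instanceof Key && Arrays.equals(bytes, ((Key) o).bytes);
    }
    @Override public int hashCode() { return Arrays.hashCode(bytes); }
  }

  /** Returns the canonical array for these contents; callers must not mutate it. */
  public static byte[] intern(byte[] arr) {
    if (arr == null) {
      return null;
    }
    byte[] existing = POOL.putIfAbsent(new Key(arr), arr);
    return existing != null ? existing : arr;
  }
}
{code}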




Apache Hadoop qbt Report: trunk+JDK8 on Linux/ppc64le

2017-06-27 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/358/

[Jun 26, 2017 5:54:01 PM] (wang) HDFS-12032. Inaccurate comment on
[Jun 26, 2017 6:20:07 PM] (wang) HDFS-11956. Do not require a storage ID or 
target storage IDs when
[Jun 26, 2017 8:24:27 PM] (raviprak) HDFS-11993. Add log info when connect to 
datanode socket address failed.
[Jun 26, 2017 10:43:50 PM] (lei) HDFS-12033. DatanodeManager picking EC 
recovery tasks should also
[Jun 27, 2017 12:35:55 AM] (rkanter) MAPREDUCE-6904. HADOOP_JOB_HISTORY_OPTS 
should be
[Jun 27, 2017 7:39:47 AM] (aengineer) HDFS-12045. Add log when Diskbalancer 
volume is transient storage type.
[Jun 27, 2017 11:49:26 AM] (aajisaka) HDFS-12040. 
TestFsDatasetImpl.testCleanShutdownOfVolume fails.




-1 overall


The following subsystems voted -1:
compile mvninstall unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc javac


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.security.TestRaceWhenRelogin 
   hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting 
   hadoop.hdfs.server.diskbalancer.command.TestDiskBalancerCommand 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.yarn.server.nodemanager.recovery.TestNMLeveldbStateStoreService 
   hadoop.yarn.server.nodemanager.TestNodeManagerShutdown 
   hadoop.yarn.server.timeline.TestRollingLevelDB 
   hadoop.yarn.server.timeline.TestTimelineDataManager 
   hadoop.yarn.server.timeline.TestLeveldbTimelineStore 
   hadoop.yarn.server.timeline.recovery.TestLeveldbTimelineStateStore 
   hadoop.yarn.server.timeline.TestRollingLevelDBTimelineStore 
   
hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer 
   hadoop.yarn.server.resourcemanager.recovery.TestLeveldbRMStateStore 
   hadoop.yarn.server.resourcemanager.TestRMRestart 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.yarn.server.timeline.TestLevelDBCacheTimelineStore 
   hadoop.yarn.server.timeline.TestOverrideTimelineStoreYarnClient 
   hadoop.yarn.server.timeline.TestEntityGroupFSTimelineStore 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.mapred.TestShuffleHandler 
   hadoop.mapreduce.v2.hs.TestHistoryServerLeveldbStateStoreService 

Timed out junit tests :

   org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache 
   org.apache.hadoop.yarn.server.resourcemanager.TestRMStoreCommands 
   
org.apache.hadoop.yarn.server.resourcemanager.recovery.TestZKRMStateStore 
   
org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA 
   
org.apache.hadoop.yarn.server.resourcemanager.TestKillApplicationWithRMHA 
  

   mvninstall:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/358/artifact/out/patch-mvninstall-root.txt
  [500K]

   compile:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/358/artifact/out/patch-compile-root.txt
  [20K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/358/artifact/out/patch-compile-root.txt
  [20K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/358/artifact/out/patch-compile-root.txt
  [20K]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/358/artifact/out/patch-unit-hadoop-assemblies.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/358/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [144K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/358/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [488K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/358/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
  [56K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/358/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt
  [52K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/358/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [76K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/358/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
  [324K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/358/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timeline-pluginstorage.txt
  [28K]
   

Re: Support for Erasure Coding for Hadoop 2.7

2017-06-27 Thread Erik Krogen
Erasure Coding is a pretty massive feature; I would not expect it to be
feasible at all to backport.

Erik

On Tue, Jun 27, 2017 at 11:42 AM, Rahul Shrivastava <
rshrivast...@salesforce.com> wrote:

> Hi All,
>
> I am trying to find out if erasure coding is possible for Hadoop 2.7 (even
> with a backport). I have gone through some of the JIRAs and content on
> Google, but found that the current development branch is Hadoop 3.0. I have
> a specific question:
>
> 1. Is it possible to backport EC to 2.7?
>
> thanks
> Rahul
>


[jira] [Created] (HDFS-12050) Ozone: StorageHandler: Implementation of "close" for releasing resources during shutdown

2017-06-27 Thread Nandakumar (JIRA)
Nandakumar created HDFS-12050:
-

 Summary: Ozone: StorageHandler: Implementation of "close" for 
releasing resources during shutdown
 Key: HDFS-12050
 URL: https://issues.apache.org/jira/browse/HDFS-12050
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Reporter: Nandakumar
Assignee: Nandakumar


When we use DistributedStorageHandler and call {{newKeyWriter}}, it creates an 
{{XceiverClientSpi}} and adds it to the clientCache, which spawns a non-daemon 
{{nioEventLoopGroup}} thread. Since {{XceiverClientManager#releaseClient}} 
doesn't invalidate the client, close is never called on the client object. 
Cleanup is triggered by both eviction and releaseClient, but the connection is 
closed only when both conditions are satisfied.

{{StorageHandler#close}} can be used to close these connections during shutdown.
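
A minimal sketch of the shape this could take (illustrative only: the 
ClientManager stand-in below is not the real Ozone type, and the assumption 
that closing the manager invalidates clientCache is what this JIRA would 
implement):

{code}
import java.io.Closeable;
import java.io.IOException;

// Illustrative sketch, not the actual Ozone API: a StorageHandler#close
// that releases the client manager on shutdown so cached clients (and
// their non-daemon nioEventLoopGroup threads) are actually closed.
public class StorageHandlerCloseSketch {

  /** Stand-in for XceiverClientManager; assumed to invalidate clientCache on close. */
  interface ClientManager extends Closeable {}

  static final class DistributedStorageHandler implements Closeable {
    private final ClientManager clientManager;

    DistributedStorageHandler(ClientManager clientManager) {
      this.clientManager = clientManager;
    }

    @Override
    public void close() throws IOException {
      // Closing the manager runs the cache-invalidation/cleanup path, so
      // every cached connection is closed exactly once during shutdown.
      clientManager.close();
    }
  }
}
{code}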






Support for Erasure Coding for Hadoop 2.7

2017-06-27 Thread Rahul Shrivastava
Hi All,

I am trying to find out if erasure coding is possible for Hadoop 2.7 (even
with a backport). I have gone through some of the JIRAs and content on
Google, but found that the current development branch is Hadoop 3.0. I have
a specific question:

1. Is it possible to backport EC to 2.7?

thanks
Rahul


[jira] [Created] (HDFS-12049) Recommissioning live nodes stalls the NN

2017-06-27 Thread Daryn Sharp (JIRA)
Daryn Sharp created HDFS-12049:
--

 Summary: Recommissioning live nodes stalls the NN
 Key: HDFS-12049
 URL: https://issues.apache.org/jira/browse/HDFS-12049
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Daryn Sharp
Priority: Critical


A node refresh will recommission included nodes that are alive and in 
decommissioning or decommissioned state.  The recommission will scan all blocks 
on the node, find over-replicated blocks, choose an excess replica, and queue 
an invalidate.

The process is expensive and worsened by the overhead of storage types (even 
when not in use).  It can be especially devastating because the write lock is 
held for the entire node refresh.  _Recommissioning 67 nodes with ~500k 
blocks/node stalled RPC services for over 4 mins._
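
To make the stall concrete, here is a stand-in sketch of the reported pattern 
(not NameNode code; all names are illustrative): the whole refresh runs under 
one write lock, so every other operation needing the lock waits until all 
nodes are rescanned.

{code}
import java.util.List;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Stand-in sketch of the reported locking pattern, not NameNode code.
public class RecommissionSketch {
  private final ReentrantReadWriteLock fsLock = new ReentrantReadWriteLock();

  void refreshNodes(List<List<String>> blocksPerRecommissionedNode) {
    fsLock.writeLock().lock();  // held for the entire refresh
    try {
      for (List<String> blocks : blocksPerRecommissionedNode) { // e.g. 67 nodes
        for (String block : blocks) {                           // ~500k per node
          // scan: find over-replicated blocks, choose an excess replica,
          // queue an invalidate -- all while RPC handlers are blocked
        }
      }
    } finally {
      fsLock.writeLock().unlock();
    }
  }
}
{code}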






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2017-06-27 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/447/

[Jun 26, 2017 7:41:00 AM] (aajisaka) HADOOP-14549. Use 
GenericTestUtils.setLogLevel when available in
[Jun 26, 2017 8:26:09 AM] (kai.zheng) HDFS-11943. [Erasure coding] Warn log 
frequently print to screen in
[Jun 26, 2017 12:39:47 PM] (stevel) HADOOP-14461 Azure: handle failure 
gracefully in case of missing account
[Jun 26, 2017 5:54:01 PM] (wang) HDFS-12032. Inaccurate comment on
[Jun 26, 2017 6:20:07 PM] (wang) HDFS-11956. Do not require a storage ID or 
target storage IDs when
[Jun 26, 2017 8:24:27 PM] (raviprak) HDFS-11993. Add log info when connect to 
datanode socket address failed.
[Jun 26, 2017 10:43:50 PM] (lei) HDFS-12033. DatanodeManager picking EC 
recovery tasks should also
[Jun 27, 2017 12:35:55 AM] (rkanter) MAPREDUCE-6904. HADOOP_JOB_HISTORY_OPTS 
should be




-1 overall


The following subsystems voted -1:
findbugs unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 
   Useless object stored in variable removedNullContainers of method 
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeOrTrackCompletedContainersFromContext(List)
 At NodeStatusUpdaterImpl.java:removedNullContainers of method 
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeOrTrackCompletedContainersFromContext(List)
 At NodeStatusUpdaterImpl.java:[line 642] 
   
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeVeryOldStoppedContainersFromCache()
 makes inefficient use of keySet iterator instead of entrySet iterator At 
NodeStatusUpdaterImpl.java:keySet iterator instead of entrySet iterator At 
NodeStatusUpdaterImpl.java:[line 719] 
   Hard coded reference to an absolute pathname in 
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DockerLinuxContainerRuntime.launchContainer(ContainerRuntimeContext)
 At DockerLinuxContainerRuntime.java:absolute pathname in 
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DockerLinuxContainerRuntime.launchContainer(ContainerRuntimeContext)
 At DockerLinuxContainerRuntime.java:[line 455] 
   
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.createStatus()
 makes inefficient use of keySet iterator instead of entrySet iterator At 
ContainerLocalizer.java:keySet iterator instead of entrySet iterator At 
ContainerLocalizer.java:[line 334] 
   
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainerMetrics.usageMetrics
 is a mutable collection which should be package protected At 
ContainerMetrics.java:which should be package protected At 
ContainerMetrics.java:[line 134] 

Failed junit tests :

   hadoop.ipc.TestProtoBufRpcServerHandoff 
   hadoop.ipc.TestRPC 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl 
   hadoop.hdfs.server.datanode.TestNNHandlesCombinedBlockReport 
   hadoop.hdfs.TestFileAppend 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160 
   hadoop.yarn.server.resourcemanager.TestRMRestart 
   hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
   hadoop.yarn.client.api.impl.TestAMRMProxy 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/447/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/447/artifact/out/diff-compile-javac-root.txt
  [192K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/447/artifact/out/diff-checkstyle-root.txt
  [17M]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/447/artifact/out/diff-patch-pylint.txt
  [20K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/447/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/447/artifact/out/diff-patch-shelldocs.txt
  [12K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/447/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/447/artifact/out/whitespace-tabs.txt
  [1.2M]

   findbugs:

   

[jira] [Created] (HDFS-12048) TestOzoneContainerRatis & TestRatisManager are failing consistently

2017-06-27 Thread Mukul Kumar Singh (JIRA)
Mukul Kumar Singh created HDFS-12048:


 Summary: TestOzoneContainerRatis & TestRatisManager are failing 
consistently
 Key: HDFS-12048
 URL: https://issues.apache.org/jira/browse/HDFS-12048
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Affects Versions: HDFS-7240
Reporter: Mukul Kumar Singh
Assignee: Mukul Kumar Singh
 Fix For: HDFS-7240


TestOzoneContainerRatis and TestRatisManager are failing with the following 
stack trace.

{code}
testBothGetandPutSmallFileRatisGrpc(org.apache.hadoop.ozone.container.ozoneimpl.TestOzoneContainerRatis)
  Time elapsed: 11.864 sec  <<< ERROR!
java.io.IOException: Timed out waiting for Mini HDFS Cluster to start
at 
org.apache.hadoop.hdfs.MiniDFSCluster.waitClusterUp(MiniDFSCluster.java:1371)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:872)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:491)
at 
org.apache.hadoop.ozone.MiniOzoneCluster.<init>(MiniOzoneCluster.java:88)
at 
org.apache.hadoop.ozone.MiniOzoneCluster.<init>(MiniOzoneCluster.java:67)
at 
org.apache.hadoop.ozone.MiniOzoneCluster$Builder.build(MiniOzoneCluster.java:387)
at 
org.apache.hadoop.ozone.container.ozoneimpl.TestOzoneContainerRatis.runTest(TestOzoneContainerRatis.java:95)
at 
org.apache.hadoop.ozone.container.ozoneimpl.TestOzoneContainerRatis.runTestBothGetandPutSmallFileRatis(TestOzoneContainerRatis.java:127)
at 
org.apache.hadoop.ozone.container.ozoneimpl.TestOzoneContainerRatis.testBothGetandPutSmallFileRatisGrpc(TestOzoneContainerRatis.java:139)
{code}






[jira] [Created] (HDFS-12047) Ozone: Add REST API documentation

2017-06-27 Thread Weiwei Yang (JIRA)
Weiwei Yang created HDFS-12047:
--

 Summary: Ozone: Add REST API documentation
 Key: HDFS-12047
 URL: https://issues.apache.org/jira/browse/HDFS-12047
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Reporter: Weiwei Yang
Assignee: Weiwei Yang


Add Ozone REST API documentation.


