[jira] [Reopened] (HDFS-12177) NameNode exits due to setting BlockPlacementPolicy loglevel to Debug

2017-07-20 Thread Jiandan Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiandan Yang  reopened HDFS-12177:
--

> NameNode exits due to  setting BlockPlacementPolicy loglevel to Debug
> -
>
> Key: HDFS-12177
> URL: https://issues.apache.org/jira/browse/HDFS-12177
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: block placement
>Affects Versions: 2.8.1
>Reporter: Jiandan Yang 
>Assignee: Jiandan Yang 
> Attachments: HDFS_9668_1.patch
>
>
> NameNode exits because the ReplicationMonitor thread throws an NPE internally.
> The NPE occurs because the builder field is not initialized when the debug
> logging code runs (for example, when the log level is switched to DEBUG after
> the initial check).
> Solution: check whether the builder is null before appending to it.
> {code:java}
> if (LOG.isDebugEnabled()) {
>   builder = debugLoggingBuilder.get();
>   builder.setLength(0);
>   builder.append("[");
> }
> some other codes ...
> if (LOG.isDebugEnabled()) {
>   builder.append("\nNode ").append(NodeBase.getPath(chosenNode))
>   .append(" [");
> }
> some other codes ...
> if (LOG.isDebugEnabled()) {
>   builder.append("\n]");
> }
> {code}
> NN exception log is :
> {code:java}
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:722)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:689)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseFromNextRack(BlockPlacementPolicyDefault.java:640)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseLocalRack(BlockPlacementPolicyDefault.java:608)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTargetInOrder(BlockPlacementPolicyDefault.java:483)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:390)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:419)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:266)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:119)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.ReplicationWork.chooseTargets(ReplicationWork.java:55)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWorkForBlocks(BlockManager.java:1532)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWork(BlockManager.java:1491)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeDatanodeWork(BlockManager.java:3768)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:3720)
> at java.lang.Thread.run(Thread.java:834)
> {code}






[jira] [Created] (HDFS-12177) NameNode exits due to setting BlockPlacementPolicy loglevel to Debug

2017-07-20 Thread Jiandan Yang (JIRA)
Jiandan Yang  created HDFS-12177:


 Summary: NameNode exits due to  setting BlockPlacementPolicy 
loglevel to Debug
 Key: HDFS-12177
 URL: https://issues.apache.org/jira/browse/HDFS-12177
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: block placement
Affects Versions: 2.8.1
Reporter: Jiandan Yang 


NameNode exits because the ReplicationMonitor thread throws an NPE internally.
The NPE occurs because the builder field is not initialized when the debug
logging code runs (for example, when the log level is switched to DEBUG after
the initial check).
Solution: check whether the builder is null before appending to it.

{code:java}
if (LOG.isDebugEnabled()) {
  builder = debugLoggingBuilder.get();
  builder.setLength(0);
  builder.append("[");
}
some other codes ...
if (LOG.isDebugEnabled()) {
  builder.append("\nNode ").append(NodeBase.getPath(chosenNode))
  .append(" [");
}
some other codes ...
if (LOG.isDebugEnabled()) {
  builder.append("\n]");
}
{code}

NN exception log is :

{code:java}
java.lang.NullPointerException
at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:722)
at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:689)
at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseFromNextRack(BlockPlacementPolicyDefault.java:640)
at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseLocalRack(BlockPlacementPolicyDefault.java:608)
at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTargetInOrder(BlockPlacementPolicyDefault.java:483)
at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:390)
at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:419)
at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:266)
at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:119)
at 
org.apache.hadoop.hdfs.server.blockmanagement.ReplicationWork.chooseTargets(ReplicationWork.java:55)
at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWorkForBlocks(BlockManager.java:1532)
at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWork(BlockManager.java:1491)
at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeDatanodeWork(BlockManager.java:3768)
at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:3720)
at java.lang.Thread.run(Thread.java:834)

{code}
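
For illustration, the failure mode and the proposed null guard can be condensed into a small self-contained sketch: the boolean below stands in for LOG.isDebugEnabled() flipping to DEBUG partway through the call. This is only a model of the race, not the actual BlockPlacementPolicyDefault code or a final patch.

{code:java}
// Sketch only: models the NPE and the "check builder for null before appending" fix.
public class DebugBuilderGuardSketch {
  private static final ThreadLocal<StringBuilder> debugLoggingBuilder =
      ThreadLocal.withInitial(StringBuilder::new);

  // stand-in for LOG.isDebugEnabled(); switched to true partway through the call
  private static volatile boolean debugEnabled = false;

  public static void main(String[] args) {
    StringBuilder builder = null;

    if (debugEnabled) {                 // false at this point, so builder stays null
      builder = debugLoggingBuilder.get();
      builder.setLength(0);
      builder.append("[");
    }

    debugEnabled = true;                // log level set to DEBUG mid-operation

    if (debugEnabled) {
      if (builder == null) {            // proposed guard: initialize lazily instead of NPE-ing
        builder = debugLoggingBuilder.get();
        builder.setLength(0);
        builder.append("[");
      }
      builder.append("\nNode ").append("/default-rack/127.0.0.1:9866").append(" [");
    }

    if (debugEnabled && builder != null) {
      builder.append("\n]");
      System.out.println(builder);
    }
  }
}
{code}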







RE: Pre-Commit build is failing

2017-07-20 Thread Brahma Reddy Battula
Looks like this problem is only in branch-2.7.


--Brahma Reddy Battula

From: Brahma Reddy Battula
Sent: 21 July 2017 09:36
To: common-...@hadoop.apache.org; hdfs-dev@hadoop.apache.org
Subject: Pre-Commit build is failing
Importance: High

Looks like the pre-commit build is failing with the following error.


/home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/patchprocess/apache-yetus-a444ed1/precommit/core.d/00-yetuslib.sh:
 line 87: 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/patchprocess/patch-dryrun.log:
 No such file or directory
/home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/patchprocess/apache-yetus-a444ed1/precommit/core.d/00-yetuslib.sh:
 line 98: 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/patchprocess/patch-dryrun.log:
 No such file or directory
/home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/patchprocess/apache-yetus-a444ed1/precommit/core.d/00-yetuslib.sh:
 line 87: 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/patchprocess/patch-dryrun.log:
 No such file or directory
/home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/patchprocess/apache-yetus-a444ed1/precommit/core.d/00-yetuslib.sh:
 line 98: 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/patchprocess/patch-dryrun.log:
 No such file or directory



Reference :

https://builds.apache.org/view/PreCommit%20Builds/job/PreCommit-HDFS-Build/20362/console




--Brahma Reddy Battula



Pre-Commit build is failing

2017-07-20 Thread Brahma Reddy Battula
Looks like the pre-commit build is failing with the following error.


/home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/patchprocess/apache-yetus-a444ed1/precommit/core.d/00-yetuslib.sh:
 line 87: 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/patchprocess/patch-dryrun.log:
 No such file or directory
/home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/patchprocess/apache-yetus-a444ed1/precommit/core.d/00-yetuslib.sh:
 line 98: 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/patchprocess/patch-dryrun.log:
 No such file or directory
/home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/patchprocess/apache-yetus-a444ed1/precommit/core.d/00-yetuslib.sh:
 line 87: 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/patchprocess/patch-dryrun.log:
 No such file or directory
/home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/patchprocess/apache-yetus-a444ed1/precommit/core.d/00-yetuslib.sh:
 line 98: 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/patchprocess/patch-dryrun.log:
 No such file or directory



Reference :

https://builds.apache.org/view/PreCommit%20Builds/job/PreCommit-HDFS-Build/20362/console




--Brahma Reddy Battula



[jira] [Created] (HDFS-12176) dfsadmin shows DFS Used%: NaN% if the cluster has zero block.

2017-07-20 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HDFS-12176:
--

 Summary: dfsadmin shows DFS Used%: NaN% if the cluster has zero 
block.
 Key: HDFS-12176
 URL: https://issues.apache.org/jira/browse/HDFS-12176
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Wei-Chiu Chuang
Priority: Trivial


This is rather a non-issue, but I thought I should file it anyway.

I have a test cluster with just an NN fsimage, no DNs, and no blocks, and dfsadmin 
shows:

{noformat}
$ hdfs dfsadmin -report
Configured Capacity: 0 (0 B)
Present Capacity: 0 (0 B)
DFS Remaining: 0 (0 B)
DFS Used: 0 (0 B)
DFS Used%: NaN%
{noformat}
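
The NaN comes from evaluating 0.0/0.0 when both DFS Used and Configured Capacity are zero. A minimal sketch of a guarded formatter follows, using illustrative names rather than the actual dfsadmin code:

{code:java}
// Sketch only: avoid NaN% by special-casing zero capacity.
public class UsedPercentSketch {
  static String formatUsedPercent(long dfsUsed, long capacity) {
    if (capacity <= 0) {
      return "0.00%";                       // or "N/A"; anything but 0.0/0.0
    }
    return String.format("%.2f%%", 100.0 * dfsUsed / capacity);
  }

  public static void main(String[] args) {
    System.out.println("DFS Used%: " + formatUsedPercent(0L, 0L)); // 0.00% instead of NaN%
  }
}
{code}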








Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2017-07-20 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/470/

[Jul 19, 2017 7:41:22 AM] (jlowe) HADOOP-14669. GenericTestUtils.waitFor should 
use monotonic time.
[Jul 19, 2017 8:21:43 AM] (brahma) HDFS-12067. Correct dfsadmin commands usage 
message to reflects IPC
[Jul 19, 2017 8:43:10 AM] (brahma) HDFS-12133. Correct 
ContentSummaryComputationContext Logger class name..
[Jul 19, 2017 10:29:06 AM] (aengineer) HDFS-12158. Secondary Namenode's web 
interface lack configs for
[Jul 19, 2017 10:56:50 AM] (yzhang) HDFS-12139. HTTPFS liststatus returns 
incorrect pathSuffix for path of
[Jul 19, 2017 12:26:40 PM] (Arun Suresh) YARN-6777. Support for 
ApplicationMasterService processing chain of
[Jul 19, 2017 1:58:55 PM] (templedf) HADOOP-14666. Tests use 
assertTrue(equals(...)) instead of




-1 overall


The following subsystems voted -1:
findbugs unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   module:hadoop-hdfs-project/hadoop-hdfs-client 
   Possible exposure of partially initialized object in 
org.apache.hadoop.hdfs.DFSClient.initThreadsNumForStripedReads(int) At 
DFSClient.java:object in 
org.apache.hadoop.hdfs.DFSClient.initThreadsNumForStripedReads(int) At 
DFSClient.java:[line 2888] 
   org.apache.hadoop.hdfs.server.protocol.SlowDiskReports.equals(Object) 
makes inefficient use of keySet iterator instead of entrySet iterator At 
SlowDiskReports.java:keySet iterator instead of entrySet iterator At 
SlowDiskReports.java:[line 105] 

FindBugs :

   module:hadoop-hdfs-project/hadoop-hdfs 
   Possible null pointer dereference in 
org.apache.hadoop.hdfs.qjournal.server.JournalNode.getJournalsStatus() due to 
return value of called method Dereferenced at 
JournalNode.java:org.apache.hadoop.hdfs.qjournal.server.JournalNode.getJournalsStatus()
 due to return value of called method Dereferenced at JournalNode.java:[line 
302] 
   
org.apache.hadoop.hdfs.server.common.HdfsServerConstants$StartupOption.setClusterId(String)
 unconditionally sets the field clusterId At HdfsServerConstants.java:clusterId 
At HdfsServerConstants.java:[line 193] 
   
org.apache.hadoop.hdfs.server.common.HdfsServerConstants$StartupOption.setForce(int)
 unconditionally sets the field force At HdfsServerConstants.java:force At 
HdfsServerConstants.java:[line 217] 
   
org.apache.hadoop.hdfs.server.common.HdfsServerConstants$StartupOption.setForceFormat(boolean)
 unconditionally sets the field isForceFormat At 
HdfsServerConstants.java:isForceFormat At HdfsServerConstants.java:[line 229] 
   
org.apache.hadoop.hdfs.server.common.HdfsServerConstants$StartupOption.setInteractiveFormat(boolean)
 unconditionally sets the field isInteractiveFormat At 
HdfsServerConstants.java:isInteractiveFormat At HdfsServerConstants.java:[line 
237] 
   Possible null pointer dereference in 
org.apache.hadoop.hdfs.server.datanode.DataStorage.linkBlocksHelper(File, File, 
int, HardLink, boolean, File, List) due to return value of called method 
Dereferenced at 
DataStorage.java:org.apache.hadoop.hdfs.server.datanode.DataStorage.linkBlocksHelper(File,
 File, int, HardLink, boolean, File, List) due to return value of called method 
Dereferenced at DataStorage.java:[line 1339] 
   Possible null pointer dereference in 
org.apache.hadoop.hdfs.server.namenode.NNStorageRetentionManager.purgeOldLegacyOIVImages(String,
 long) due to return value of called method Dereferenced at 
NNStorageRetentionManager.java:org.apache.hadoop.hdfs.server.namenode.NNStorageRetentionManager.purgeOldLegacyOIVImages(String,
 long) due to return value of called method Dereferenced at 
NNStorageRetentionManager.java:[line 258] 
   Possible null pointer dereference in 
org.apache.hadoop.hdfs.server.namenode.NNUpgradeUtil$1.visitFile(Path, 
BasicFileAttributes) due to return value of called method Dereferenced at 
NNUpgradeUtil.java:org.apache.hadoop.hdfs.server.namenode.NNUpgradeUtil$1.visitFile(Path,
 BasicFileAttributes) due to return value of called method Dereferenced at 
NNUpgradeUtil.java:[line 133] 
   Useless condition:argv.length >= 1 at this point At DFSAdmin.java:[line 
2096] 
   Useless condition:numBlocks == -1 at this point At 
ImageLoaderCurrent.java:[line 727] 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 
   Useless object stored in variable removedNullContainers of method 
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeOrTrackCompletedContainersFromContext(List)
 At NodeStatusUpdaterImpl.java:removedNullContainers of method 
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeOrTrackCompletedContainersFromContext(List)
 At 

Re: Corona is here -- And Ozone works !!!!

2017-07-20 Thread Mingliang Liu
Still, it’s good news!

Best,

> On Jul 20, 2017, at 2:44 PM, Anu Engineer  wrote:
> 
> Sorry, it was meant for a wrong alias. My apologies.
> 
> —Anu
> 
> 
> 
> 
> 
> On 7/20/17, 2:40 PM, "Anu Engineer"  wrote:
> 
>> Hi All,
>> 
>> I just deployed a test cluster with Nandakumar and we were able to run 
>> corona from a single node, with 10 thread for 12 mins.
>> 
>> We were able to write 789 MB and were writing 66 keys per second from a 
>> single node.
>> 
>> ***
>> Number of Volumes created: 10
>> Number of Buckets created: 10
>> Number of Keys added: 78984
>> Execution time: 12 minutes
>> 
>> 
>> The fun fact, Ozone just worked :), This is the first time we have written 
>> something like 78K keys into ozone. We are just starting on Corona and we 
>> will expand to run it from multiple nodes.
>> 
>> —Anu
>> 
> 
> 





[jira] [Created] (HDFS-12175) Fix Leaking in TestStorageContainerManager#testRpcPermission

2017-07-20 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDFS-12175:
-

 Summary: Fix Leaking in 
TestStorageContainerManager#testRpcPermission
 Key: HDFS-12175
 URL: https://issues.apache.org/jira/browse/HDFS-12175
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: HDFS-7240
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


Multiple MiniOzoneCluster instances are spun up during the tests, but only the last 
one is shut down. This leaks resources and causes IntelliJ to OOM after 3 consecutive runs.
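
A minimal sketch of the usual fix for this kind of leak is to tear the cluster down after every test rather than only at the end. It assumes MiniOzoneCluster exposes a shutdown() method analogous to MiniDFSCluster's; the stand-in type and helper below are illustrative, not the actual test code.

{code:java}
// Sketch only: JUnit 4 per-test teardown so no test cluster instance leaks.
import org.junit.After;
import org.junit.Before;

public class TestRpcPermissionTeardownSketch {
  // hypothetical stand-in for MiniOzoneCluster; the real class's API may differ
  interface TestCluster {
    void shutdown();
  }

  private TestCluster cluster;

  @Before
  public void setUp() {
    cluster = startCluster();
  }

  @After
  public void tearDown() {
    if (cluster != null) {
      cluster.shutdown();     // shut down every cluster created, not just the last one
      cluster = null;
    }
  }

  private TestCluster startCluster() {
    // placeholder for the real MiniOzoneCluster construction
    return () -> { /* no-op shutdown in this sketch */ };
  }
}
{code}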






Re: Corona is here -- And Ozone works !!!!

2017-07-20 Thread Anu Engineer
Sorry, it was meant for a wrong alias. My apologies.

—Anu





On 7/20/17, 2:40 PM, "Anu Engineer"  wrote:

>Hi All,
>
>I just deployed a test cluster with Nandakumar and we were able to run corona 
>from a single node, with 10 thread for 12 mins.
>
>We were able to write 789 MB and were writing 66 keys per second from a single 
>node.
>
>***
>Number of Volumes created: 10
>Number of Buckets created: 10
>Number of Keys added: 78984
>Execution time: 12 minutes
>
>
>The fun fact, Ozone just worked :), This is the first time we have written 
>something like 78K keys into ozone. We are just starting on Corona and we will 
>expand to run it from multiple nodes.
>
>—Anu
>


[jira] [Created] (HDFS-12173) MiniDFSCluster cannot reliably use NameNode#stop

2017-07-20 Thread Daryn Sharp (JIRA)
Daryn Sharp created HDFS-12173:
--

 Summary: MiniDFSCluster cannot reliably use NameNode#stop
 Key: HDFS-12173
 URL: https://issues.apache.org/jira/browse/HDFS-12173
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.7.0
Reporter: Daryn Sharp


Sporadic test failures occur because {{NameNode#stop}} used by the mini cluster 
does not properly manage the HA context's state.  It directly calls 
{{HAState#exitState(context)}} instead of {{HAState#setState(context,state)}}.  
The latter will properly lock the namesystem and update the ha state while 
locked, while the former does not.  The result is that while the cluster is 
stopping, the lock is released and any queued rpc calls think the NN is still 
active and are processed while the NN is in an unstable half-stopped state.
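
A conceptual sketch of the difference being described, with hypothetical names standing in for the real HAState/HAContext API; it only illustrates why holding the namesystem lock across the transition matters.

{code:java}
// Conceptual sketch only; NnContext and both methods are illustrative stand-ins.
public class HaTransitionSketch {
  interface NnContext {
    void writeLock();
    void writeUnlock();
    void setCurrentState(String state);
  }

  // setState-style transition: the state change happens while the lock is held,
  // so queued RPCs cannot run against a half-stopped "active" NN.
  void setStateLikeTransition(NnContext ctx, String newState) {
    ctx.writeLock();
    try {
      exitStateLikeTransition(ctx);      // leave the current state
      ctx.setCurrentState(newState);     // ...still under the lock
    } finally {
      ctx.writeUnlock();
    }
  }

  // exitState-style call (what NameNode#stop ends up doing in MiniDFSCluster):
  // no lock around the transition, so RPCs waiting on the lock can be processed
  // while the NN is partially stopped.
  void exitStateLikeTransition(NnContext ctx) {
    // stop services without coordinating with the namesystem lock
  }
}
{code}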






Apache Hadoop 2.8.2 Release Plan

2017-07-20 Thread Junping Du
Hi all,
 Per Vinod's previous email, we just announced that Apache Hadoop 2.8.1 was 
released today; it is a special security release. Now we should work towards the 
2.8.2 release, which aims for production deployment. The focus obviously is to 
fix blocker/critical issues [1] and bugs, with *no* features / improvements. We 
currently have 13 blocker/critical issues, and 10 of them are Patch Available.

   I plan to cut an RC in a month - targeting a release before the end of August, to 
give enough time for outstanding blocker / critical issues. I will start moving 
out any tickets that are not blockers and/or won't fit the timeline. For 
progress of the release effort, please refer to our release wiki [2].

   Please share thoughts if you have any. Thanks!

Thanks,

Junping

[1] 2.8.2 release Blockers/Criticals: https://s.apache.org/JM5x
[2] 2.8 Release wiki: 
https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+2.8+Release


From: Vinod Kumar Vavilapalli 
Sent: Thursday, July 20, 2017 1:05 PM
To: gene...@hadoop.apache.org
Subject: [ANNOUNCE] Apache Hadoop 2.8.1 is released

Hi all,

The Apache Hadoop PMC has released version 2.8.1. You can get it from this 
page: http://hadoop.apache.org/releases.html#Download
This is a security release in the 2.8.0 release line. It consists of 2.8.0 plus 
security fixes. Users on 2.8.0 are encouraged to upgrade to 2.8.1.

Please note that the 2.8.x release line is still not ready for 
production use. Critical issues are being ironed out via testing and downstream 
adoption. Production users should wait for a subsequent release in the 2.8.x 
line.

Thanks
+Vinod





Re: About 2.7.4 Release

2017-07-20 Thread Vinod Kumar Vavilapalli
Thanks for taking 2.7.4 over Konstantin!

Regarding rolling RC next week, I still see that there are 4 blocker / critical 
tickets targeted for 2.7.4: 
https://issues.apache.org/jira/issues/?jql=project%20in%20(HDFS%2C%20MAPREDUCE%2C%20HADOOP%2C%20YARN)%20AND%20priority%20in%20(Blocker%2C%20Critical)%20AND%20resolution%20%3D%20Unresolved%20AND%20%22Target%20Version%2Fs%22%20%3D%202.7.4

We should get closure on them. https://issues.apache.org/jira/browse/HDFS-11742 
definitely was something that was deemed a blocker for 2.8.2, not sure about 2.7.4.

I’m ‘back’ - let me know if you need any help.

Thanks
+Vinod

> On Jul 13, 2017, at 5:45 PM, Konstantin Shvachko  wrote:
> 
> Hi everybody.
> 
> We have been doing some internal testing of Hadoop 2.7.4. The testing is
> going well.
> Did not find any major issues on our workloads.
> Used an internal tool called Dynamometer to check NameNode performance on
> real cluster traces. Good.
> Overall test cluster performance looks good.
> Some more testing is still going on.
> 
> I plan to build an RC next week. If there are no objection.
> 
> Thanks,
> --Konst
> 
> On Thu, Jun 15, 2017 at 4:42 PM, Konstantin Shvachko 
> wrote:
> 
>> Hey guys.
>> 
>> An update on 2.7.4 progress.
>> We are down to 4 blockers. There is some work remaining on those.
>> https://issues.apache.org/jira/browse/HDFS-11896?filter=12340814
>> Would be good if people could follow up on review comments.
>> 
>> I looked through nightly Jenkins build results for 2.7.4 both on Apache
>> Jenkins and internal.
>> Some test fail intermittently, but there no consistent failures. I filed
>> HDFS-11985 to track some of them.
>> https://issues.apache.org/jira/browse/HDFS-11985
>> I do not currently consider these failures as blockers. LMK if some of
>> them are.
>> 
>> We started internal testing of branch-2.7 on one of our smallish (100+
>> nodes) test clusters.
>> Will update on the results.
>> 
>> There is a plan to enable BigTop for 2.7.4 testing.
>> 
>> Akira, Brahma thank you for setting up a wiki page for 2.7.4 release.
>> Thank you everybody for contributing to this effort.
>> 
>> Regards,
>> --Konstantin
>> 
>> 
>> On Tue, May 30, 2017 at 12:08 AM, Akira Ajisaka 
>> wrote:
>> 
>>> Sure.
>>> If you want to edit the wiki, please tell me your ASF confluence account.
>>> 
>>> -Akira
>>> 
>>> On 2017/05/30 15:31, Rohith Sharma K S wrote:
>>> 
 Couple of more JIRAs need to be back ported for 2.7.4 release. These will
 solve RM HA unstability issues.
 https://issues.apache.org/jira/browse/YARN-5333
 https://issues.apache.org/jira/browse/YARN-5988
 https://issues.apache.org/jira/browse/YARN-6304
 
 I will raise a JIRAs to back port it.
 
 @Akira , could  you help to add these JIRAs into wiki?
 
 Thanks & Regards
 Rohith Sharma K S
 
 On 29 May 2017 at 12:19, Akira Ajisaka  wrote:
 
 Created a page for 2.7.4 release.
> https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+2.7.4
> 
> If you want to edit this wiki, please ping me.
> 
> Regards,
> Akira
> 
> 
> On 2017/05/23 4:42, Brahma Reddy Battula wrote:
> 
> Hi Konstantin Shvachko
>> 
>> 
>> how about creating a wiki page for 2.7.4 release status like 2.8 and
>> trunk in following link.??
>> 
>> 
>> https://cwiki.apache.org/confluence/display/HADOOP
>> 
>> 
>> 
>> From: Konstantin Shvachko 
>> Sent: Saturday, May 13, 2017 3:58 AM
>> To: Akira Ajisaka
>> Cc: Hadoop Common; Hdfs-dev; mapreduce-...@hadoop.apache.org;
>> yarn-...@hadoop.apache.org
>> Subject: Re: About 2.7.4 Release
>> 
>> Latest update on the links and filters. Here is the correct link for
>> the
>> filter:
>> https://issues.apache.org/jira/secure/IssueNavigator.jspa?
>> requestId=12340814
>> 
>> Also updated: https://s.apache.org/Dzg4
>> 
>> Had to do some Jira debugging. Sorry for confusion.
>> 
>> Thanks,
>> --Konstantin
>> 
>> On Wed, May 10, 2017 at 2:30 PM, Konstantin Shvachko <
>> shv.had...@gmail.com>
>> wrote:
>> 
>> Hey Akira,
>> 
>>> 
>>> I didn't have private filters. Most probably Jira caches something.
>>> Your filter is in the right direction, but for some reason it lists
>>> only
>>> 22 issues, while mine has 29.
>>> It misses e.g. YARN-5543 <https://issues.apache.org/jira/browse/YARN-5543>.
>>> 
>>> Anyways, I created a Jira filter now "Hadoop 2.7.4 release 

[jira] [Created] (HDFS-12172) Reduce EZ lookup overhead

2017-07-20 Thread Daryn Sharp (JIRA)
Daryn Sharp created HDFS-12172:
--

 Summary: Reduce EZ lookup overhead
 Key: HDFS-12172
 URL: https://issues.apache.org/jira/browse/HDFS-12172
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.7.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp


A number of inefficiencies exist in EZ lookups.  These are amplified by 
frequent operations like list status.  Once one encryption zone exists, all 
operations take the performance penalty.

Ex. Operations should not perform redundant lookups.  EZ path reconstruction 
should be lazy since it's not required in the common case.  Renames do not need 
to reallocate new IIPs to check parent dirs for EZ.
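
As one illustration of the "lazy EZ path reconstruction" point, here is a sketch of deferring the expensive inode-to-path walk until it is actually needed; the names are hypothetical, not the actual HDFS code.

{code:java}
import java.util.function.LongFunction;

// Sketch only: build the EZ path string lazily and memoize it, since most
// operations never need the reconstructed path.
final class LazyEzPathSketch {
  private final long ezInodeId;
  private volatile String path;                    // null until first use

  LazyEzPathSketch(long ezInodeId) {
    this.ezInodeId = ezInodeId;
  }

  String getPath(LongFunction<String> pathResolver) {
    String p = path;
    if (p == null) {                               // common case: skipped entirely
      p = pathResolver.apply(ezInodeId);           // expensive inode -> path walk
      path = p;
    }
    return p;
  }
}
{code}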






Re: LinkedIn Dynamometer Tool (was About 2.7.4 Release)

2017-07-20 Thread Erik Krogen
Hi Anu,

Thanks for the interest!

1. Unfortunately I am very doubtful that LinkedIn security would let us
release our traces. If you collect your audit logs, that's the only thing
necessary to build traces.
2. Our current approach is to use 'truncate' to create sparse files which
have the expected length. We overwrite the checksum as well. Luckily this
requires no modification to the DN itself and has worked quite well.
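
For anyone curious what the 'truncate' trick looks like, a rough illustration of creating a file with a given logical length but (on most filesystems) no allocated data blocks; Dynamometer's actual implementation may differ, and the file name here is made up. The shell equivalent is `truncate -s <size> <file>`.

{code:java}
import java.io.RandomAccessFile;

// Rough illustration only: fake a 128 MB block replica as a sparse file.
public class SparseFileSketch {
  public static void main(String[] args) throws Exception {
    long expectedLength = 128L * 1024 * 1024;      // logical length the DN should report
    try (RandomAccessFile raf = new RandomAccessFile("blk_0000001", "rw")) {
      raf.setLength(expectedLength);               // extends the file without writing data
    }
  }
}
{code}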

Erik

On Thu, Jul 20, 2017 at 10:56 AM, Anu Engineer 
wrote:

> Hi Erik,
>
> Looking forward to the release of this tool. Thank you very much for the
> contribution.
>
> Had a couple of questions about how the tool works.
>
> 1. Would you be able to provide the traces along with this tool? In other
> words, would I be able to use this out of the box, or do I have to build up
> traces myself?
>
> 2. Could you explain how the “fake out DNs into thinking they are storing
> data” — works? Or I can be patient and read your blog post too.
>
> Thanks
> Anu
>
>
>
>
>
>
> On 7/20/17, 10:42 AM, "Erik Krogen"  wrote:
>
> >forking off of the 2.7.4 release thread to answer this question about
> >Dynamometer
> >
> >Dynamometer is a tool developed at LinkedIn for scale testing HDFS,
> >specifically the NameNode. We have been using it for some time now and
> have
> >recently been making some enhancements to ease of use and reproducibility.
> >We hope to post a blog post sometime in the not-too-distant future, and
> >also to open source it. I can provide some details here given that we have
> >been leveraging it as part of our 2.7.4 release / upgrade process (in
> >addition to previous upgrades).
> >
> >The basic idea is to get full-scale black-box testing of the HDFS NN while
> >using significantly less (~10%) hardware than a real cluster of that size
> >would require. We use real NN images from our at-scale clusters paired
> with
> >some logic to fake out DNs into thinking they are storing data when they
> >are not, allowing us to stuff more DNs onto each machine. Since we use a
> >real image, we can replay real traces (collected from audit logs) to
> >compare actual production performance vs. performance on this simulated
> >cluster (with additional tuning, different version, etc.). We leverage
> YARN
> >to manage setting up this cluster and to replay the traces.
> >
> >Happy to answer questions.
> >
> >Erik
> >
> >On Wed, Jul 19, 2017 at 5:05 PM, Konstantin Shvachko <
> shv.had...@gmail.com>
> >wrote:
> >
> >> Hi Tianyi,
> >>
> >> Glad you are interested in Dynamometer. Erik (CC-ed) is actively working
> >> on this project right now, I'll let him elaborate.
> >> Erik, you should probably respond on Apache dev list, as I think it
> could
> >> be interesting for other people as well, asince we planned to open
> source
> >> it. You can fork the "About 2.7.4 Release" thread with a new subject and
> >> give some details about Dynamometer there.
> >>
> >> Thanks,
> >> --Konstantin
> >>
> >> On Wed, Jul 19, 2017 at 1:40 AM, 何天一  wrote:
> >>
> >>> Hi, Shavachko.
> >>>
> >>> You mentioned an internal tool called Dynamometer to test NameNode
> >>> performance earlier in the 2.7.4 release thread.
> >>> I wonder if you could share some ideas behind the tool. Or is there a
> >>> plan to bring Dynamometer to open source community?
> >>>
> >>> Thanks.
> >>>
> >>> BR,
> >>> Tianyi
> >>>
> >>> On Fri, Jul 14, 2017 at 8:45 AM Konstantin Shvachko <
> shv.had...@gmail.com>
> >>> wrote:
> >>>
>  Hi everybody.
> 
>  We have been doing some internal testing of Hadoop 2.7.4. The testing
> is
>  going well.
>  Did not find any major issues on our workloads.
>  Used an internal tool called Dynamometer to check NameNode
> performance on
>  real cluster traces. Good.
>  Overall test cluster performance looks good.
>  Some more testing is still going on.
> 
>  I plan to build an RC next week. If there are no objection.
> 
>  Thanks,
>  --Konst
> 
>  On Thu, Jun 15, 2017 at 4:42 PM, Konstantin Shvachko <
>  shv.had...@gmail.com>
>  wrote:
> 
>  > Hey guys.
>  >
>  > An update on 2.7.4 progress.
>  > We are down to 4 blockers. There is some work remaining on those.
>  > https://issues.apache.org/jira/browse/HDFS-11896?filter=12340814
>  > Would be good if people could follow up on review comments.
>  >
>  > I looked through nightly Jenkins build results for 2.7.4 both on
> Apache
>  > Jenkins and internal.
>  > Some test fail intermittently, but there no consistent failures. I
>  filed
>  > HDFS-11985 to track some of them.
>  > https://issues.apache.org/jira/browse/HDFS-11985
>  > I do not currently consider these failures as blockers. LMK if some
> of
>  > them are.
>  >
>  > We started internal testing of branch-2.7 on one of our smallish
> (100+
>  > nodes) test clusters.
>  > Will update on the 

[jira] [Created] (HDFS-12171) Reduce IIP object allocations for inode lookup

2017-07-20 Thread Daryn Sharp (JIRA)
Daryn Sharp created HDFS-12171:
--

 Summary: Reduce IIP object allocations for inode lookup
 Key: HDFS-12171
 URL: https://issues.apache.org/jira/browse/HDFS-12171
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 2.7.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp


{{IIP#getReadOnlyINodes}} is invoked frequently for EZ and EC lookups.  It 
allocates unnecessary objects to make the primitive array an immutable array 
list.  IIP already has a method for indexed inode retrieval that can be tweaked 
to further improve performance.
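
A sketch of the allocation contrast being described, with hypothetical names rather than the real INodesInPath code: wrapping the array creates new objects on every call, while indexed retrieval reuses the existing array.

{code:java}
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Sketch only: per-call wrapper allocation vs. allocation-free indexed access.
final class InodePathSketch {
  private final Object[] inodes;                   // stand-in for the INode[] array

  InodePathSketch(Object[] inodes) {
    this.inodes = inodes;
  }

  // Allocates a new wrapper (and list view) on every call.
  List<Object> getReadOnlyINodes() {
    return Collections.unmodifiableList(Arrays.asList(inodes));
  }

  // No allocation: callers that only need one inode can index directly.
  Object getINode(int i) {
    return inodes[i];
  }

  int length() {
    return inodes.length;
  }
}
{code}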






Re: LinkedIn Dynamometer Tool (was About 2.7.4 Release)

2017-07-20 Thread Anu Engineer
Hi Erik,

Looking forward to the release of this tool. Thank you very much for the 
contribution.

Had a couple of questions about how the tool works.

1. Would you be able to provide the traces along with this tool? In other 
words, would I be able to use this out of the box, or do I have to build up 
traces myself? 

2. Could you explain how the “fake out DNs into thinking they are storing data” 
— works? Or I can be patient and read your blog post too.

Thanks
Anu






On 7/20/17, 10:42 AM, "Erik Krogen"  wrote:

>forking off of the 2.7.4 release thread to answer this question about
>Dynamometer
>
>Dynamometer is a tool developed at LinkedIn for scale testing HDFS,
>specifically the NameNode. We have been using it for some time now and have
>recently been making some enhancements to ease of use and reproducibility.
>We hope to post a blog post sometime in the not-too-distant future, and
>also to open source it. I can provide some details here given that we have
>been leveraging it as part of our 2.7.4 release / upgrade process (in
>addition to previous upgrades).
>
>The basic idea is to get full-scale black-box testing of the HDFS NN while
>using significantly less (~10%) hardware than a real cluster of that size
>would require. We use real NN images from our at-scale clusters paired with
>some logic to fake out DNs into thinking they are storing data when they
>are not, allowing us to stuff more DNs onto each machine. Since we use a
>real image, we can replay real traces (collected from audit logs) to
>compare actual production performance vs. performance on this simulated
>cluster (with additional tuning, different version, etc.). We leverage YARN
>to manage setting up this cluster and to replay the traces.
>
>Happy to answer questions.
>
>Erik
>
>On Wed, Jul 19, 2017 at 5:05 PM, Konstantin Shvachko 
>wrote:
>
>> Hi Tianyi,
>>
>> Glad you are interested in Dynamometer. Erik (CC-ed) is actively working
>> on this project right now, I'll let him elaborate.
>> Erik, you should probably respond on Apache dev list, as I think it could
>> be interesting for other people as well, asince we planned to open source
>> it. You can fork the "About 2.7.4 Release" thread with a new subject and
>> give some details about Dynamometer there.
>>
>> Thanks,
>> --Konstantin
>>
>> On Wed, Jul 19, 2017 at 1:40 AM, 何天一  wrote:
>>
>>> Hi, Shavachko.
>>>
>>> You mentioned an internal tool called Dynamometer to test NameNode
>>> performance earlier in the 2.7.4 release thread.
>>> I wonder if you could share some ideas behind the tool. Or is there a
>>> plan to bring Dynamometer to open source community?
>>>
>>> Thanks.
>>>
>>> BR,
>>> Tianyi
>>>
>>> On Fri, Jul 14, 2017 at 8:45 AM Konstantin Shvachko 
>>> wrote:
>>>
 Hi everybody.

 We have been doing some internal testing of Hadoop 2.7.4. The testing is
 going well.
 Did not find any major issues on our workloads.
 Used an internal tool called Dynamometer to check NameNode performance on
 real cluster traces. Good.
 Overall test cluster performance looks good.
 Some more testing is still going on.

 I plan to build an RC next week. If there are no objection.

 Thanks,
 --Konst

 On Thu, Jun 15, 2017 at 4:42 PM, Konstantin Shvachko <
 shv.had...@gmail.com>
 wrote:

 > Hey guys.
 >
 > An update on 2.7.4 progress.
 > We are down to 4 blockers. There is some work remaining on those.
 > https://issues.apache.org/jira/browse/HDFS-11896?filter=12340814
 > Would be good if people could follow up on review comments.
 >
 > I looked through nightly Jenkins build results for 2.7.4 both on Apache
 > Jenkins and internal.
 > Some test fail intermittently, but there no consistent failures. I
 filed
 > HDFS-11985 to track some of them.
 > https://issues.apache.org/jira/browse/HDFS-11985
 > I do not currently consider these failures as blockers. LMK if some of
 > them are.
 >
 > We started internal testing of branch-2.7 on one of our smallish (100+
 > nodes) test clusters.
 > Will update on the results.
 >
 > There is a plan to enable BigTop for 2.7.4 testing.
 >
 > Akira, Brahma thank you for setting up a wiki page for 2.7.4 release.
 > Thank you everybody for contributing to this effort.
 >
 > Regards,
 > --Konstantin
 >
 >
 > On Tue, May 30, 2017 at 12:08 AM, Akira Ajisaka 
 > wrote:
 >
 >> Sure.
 >> If you want to edit the wiki, please tell me your ASF confluence
 account.
 >>
 >> -Akira
 >>
 >> On 2017/05/30 15:31, Rohith Sharma K S wrote:
 >>
 >>> Couple of more JIRAs need to be back ported for 2.7.4 release. These
 will
 >>> solve RM HA unstability issues.
 >>> https://issues.apache.org/jira/browse/YARN-5333
 >>> 

LinkedIn Dynamometer Tool (was About 2.7.4 Release)

2017-07-20 Thread Erik Krogen
forking off of the 2.7.4 release thread to answer this question about
Dynamometer

Dynamometer is a tool developed at LinkedIn for scale testing HDFS,
specifically the NameNode. We have been using it for some time now and have
recently been making some enhancements to ease of use and reproducibility.
We hope to post a blog post sometime in the not-too-distant future, and
also to open source it. I can provide some details here given that we have
been leveraging it as part of our 2.7.4 release / upgrade process (in
addition to previous upgrades).

The basic idea is to get full-scale black-box testing of the HDFS NN while
using significantly less (~10%) hardware than a real cluster of that size
would require. We use real NN images from our at-scale clusters paired with
some logic to fake out DNs into thinking they are storing data when they
are not, allowing us to stuff more DNs onto each machine. Since we use a
real image, we can replay real traces (collected from audit logs) to
compare actual production performance vs. performance on this simulated
cluster (with additional tuning, different version, etc.). We leverage YARN
to manage setting up this cluster and to replay the traces.

Happy to answer questions.

Erik

On Wed, Jul 19, 2017 at 5:05 PM, Konstantin Shvachko 
wrote:

> Hi Tianyi,
>
> Glad you are interested in Dynamometer. Erik (CC-ed) is actively working
> on this project right now, I'll let him elaborate.
> Erik, you should probably respond on Apache dev list, as I think it could
> be interesting for other people as well, asince we planned to open source
> it. You can fork the "About 2.7.4 Release" thread with a new subject and
> give some details about Dynamometer there.
>
> Thanks,
> --Konstantin
>
> On Wed, Jul 19, 2017 at 1:40 AM, 何天一  wrote:
>
>> Hi, Shavachko.
>>
>> You mentioned an internal tool called Dynamometer to test NameNode
>> performance earlier in the 2.7.4 release thread.
>> I wonder if you could share some ideas behind the tool. Or is there a
>> plan to bring Dynamometer to open source community?
>>
>> Thanks.
>>
>> BR,
>> Tianyi
>>
>> On Fri, Jul 14, 2017 at 8:45 AM Konstantin Shvachko 
>> wrote:
>>
>>> Hi everybody.
>>>
>>> We have been doing some internal testing of Hadoop 2.7.4. The testing is
>>> going well.
>>> Did not find any major issues on our workloads.
>>> Used an internal tool called Dynamometer to check NameNode performance on
>>> real cluster traces. Good.
>>> Overall test cluster performance looks good.
>>> Some more testing is still going on.
>>>
>>> I plan to build an RC next week. If there are no objection.
>>>
>>> Thanks,
>>> --Konst
>>>
>>> On Thu, Jun 15, 2017 at 4:42 PM, Konstantin Shvachko <
>>> shv.had...@gmail.com>
>>> wrote:
>>>
>>> > Hey guys.
>>> >
>>> > An update on 2.7.4 progress.
>>> > We are down to 4 blockers. There is some work remaining on those.
>>> > https://issues.apache.org/jira/browse/HDFS-11896?filter=12340814
>>> > Would be good if people could follow up on review comments.
>>> >
>>> > I looked through nightly Jenkins build results for 2.7.4 both on Apache
>>> > Jenkins and internal.
>>> > Some test fail intermittently, but there no consistent failures. I
>>> filed
>>> > HDFS-11985 to track some of them.
>>> > https://issues.apache.org/jira/browse/HDFS-11985
>>> > I do not currently consider these failures as blockers. LMK if some of
>>> > them are.
>>> >
>>> > We started internal testing of branch-2.7 on one of our smallish (100+
>>> > nodes) test clusters.
>>> > Will update on the results.
>>> >
>>> > There is a plan to enable BigTop for 2.7.4 testing.
>>> >
>>> > Akira, Brahma thank you for setting up a wiki page for 2.7.4 release.
>>> > Thank you everybody for contributing to this effort.
>>> >
>>> > Regards,
>>> > --Konstantin
>>> >
>>> >
>>> > On Tue, May 30, 2017 at 12:08 AM, Akira Ajisaka 
>>> > wrote:
>>> >
>>> >> Sure.
>>> >> If you want to edit the wiki, please tell me your ASF confluence
>>> account.
>>> >>
>>> >> -Akira
>>> >>
>>> >> On 2017/05/30 15:31, Rohith Sharma K S wrote:
>>> >>
>>> >>> Couple of more JIRAs need to be back ported for 2.7.4 release. These
>>> will
>>> >>> solve RM HA unstability issues.
>>> >>> https://issues.apache.org/jira/browse/YARN-5333
>>> >>> https://issues.apache.org/jira/browse/YARN-5988
>>> >>> https://issues.apache.org/jira/browse/YARN-6304
>>> >>>
>>> >>> I will raise a JIRAs to back port it.
>>> >>>
>>> >>> @Akira , could  you help to add these JIRAs into wiki?
>>> >>>
>>> >>> Thanks & Regards
>>> >>> Rohith Sharma K S
>>> >>>
>>> >>> On 29 May 2017 at 12:19, Akira Ajisaka  wrote:
>>> >>>
>>> >>> Created a page for 2.7.4 release.
>>>  https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+2.7.4
>>> 
>>>  If you want to edit this wiki, please ping me.
>>> 
>>>  Regards,
>>>  Akira
>>> 
>>> 
>>>  On 2017/05/23 4:42, Brahma Reddy Battula 

[jira] [Created] (HDFS-12170) Ozone: OzoneFileSystem: KSM should maintain key creation time and modification time

2017-07-20 Thread Mukul Kumar Singh (JIRA)
Mukul Kumar Singh created HDFS-12170:


 Summary: Ozone: OzoneFileSystem: KSM should maintain key creation 
time and modification time
 Key: HDFS-12170
 URL: https://issues.apache.org/jira/browse/HDFS-12170
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: ozone
Affects Versions: HDFS-7240
Reporter: Mukul Kumar Singh
Assignee: Mukul Kumar Singh
 Fix For: HDFS-7240


OzoneFileSystem will need modification times for files and directories created 
in the Ozone file system.

KSM should maintain the creation time and modification time for each individual 
key.






[jira] [Created] (HDFS-12169) libhdfs++: Exceptions from third party libs aren't caught

2017-07-20 Thread James Clampffer (JIRA)
James Clampffer created HDFS-12169:
--

 Summary: libhdfs++: Exceptions from third party libs aren't caught
 Key: HDFS-12169
 URL: https://issues.apache.org/jira/browse/HDFS-12169
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: James Clampffer


Some of our third-party libraries throw, and it's unclear if we properly catch 
exceptions in the places where they need to be dealt with.  We should do a pass over 
the public API of each library and catch exceptions close to the calls that 
throw them.  Right now the async worker threads make a last-ditch effort to 
prevent them from exiting the library, but there are situations where RAII hasn't 
been done right (fixing this is critical too) and unwinding the stack leaves 
things in an inconsistent state.






[jira] [Created] (HDFS-12168) libhdfs++: test4tests doesn't detect C++ test changes

2017-07-20 Thread James Clampffer (JIRA)
James Clampffer created HDFS-12168:
--

 Summary: libhdfs++: test4tests doesn't detect C++ test changes
 Key: HDFS-12168
 URL: https://issues.apache.org/jira/browse/HDFS-12168
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: James Clampffer
Priority: Minor


All patches "fail" because test4tests doesn't look for new or updated C++ 
tests.  This isn't a huge deal as long as everyone is careful with their 
patches and reviews, but I think this would be really nice to have prior to 
integrating into the mainline branch.

Not sure if it's worth doing before rebasing onto a newer version of trunk in 
case there's any build system changes that'd break the patch.  I'd be thrilled 
if anyone who knows the maven and CI infrastructure well could give some 
pointers about how to go about extending the test framework.






[jira] [Created] (HDFS-12167) Ozone: Intermittent failure TestContainerPersistence#testListKey

2017-07-20 Thread Weiwei Yang (JIRA)
Weiwei Yang created HDFS-12167:
--

 Summary: Ozone: Intermittent failure 
TestContainerPersistence#testListKey
 Key: HDFS-12167
 URL: https://issues.apache.org/jira/browse/HDFS-12167
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone, test
Reporter: Weiwei Yang
Priority: Minor









Re: Running HDFS from source broken since HDFS-11596

2017-07-20 Thread John Zhuge
Hi Lars,

I am able to run pseudo-distributed mode from a dev tree. Here is the wiki:
https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SingleCluster.html#Pseudo-Distributed_Operation
.

Check out my script pseudo_dist to start/stop a pseudo-distributed cluster.

Here are the steps:

   1. mvn install -DskipTests -DskipShade -Dmaven.javadoc.skip -Pdist -Dtar
   2. pseudo_dist start ~/hadoop-sanity-tests/config/insecure/
   3. test_env hdfs dfs -ls /tmp

Thanks,

On Wed, Jul 19, 2017 at 11:49 PM, Lars Francke 
wrote:

> I've already asked in 
> but haven't gotten a reply so far so I thought I'd bump it here.
>
> The issue replaces the compile time dependency of the various HDFS projects
> to hdfs-client with a "provided" dependency.
>
> Unfortunately that means that HDFS cannot be run anymore from source as is
> documented in the Wiki (<
> https://wiki.apache.org/hadoop/HowToSetupYourDevelopmentEnvironment>) and
> as used to be possible before the patch. This is because the hdfs client
> classes (e.g. ClientProtocol is the first one that HDFS complains about
> during startup) are not in the classpath anymore.
>
> I wonder how all of you are running Hadoop these days from source? I've
> always followed the Wiki instructions but maybe they are out of date and
> there's a better way?
>
> Thanks,
> Lars
>



-- 
John


Running HDFS from source broken since HDFS-11596

2017-07-20 Thread Lars Francke
I've already asked in 
but haven't gotten a reply so far so I thought I'd bump it here.

The issue replaces the compile time dependency of the various HDFS projects
to hdfs-client with a "provided" dependency.

Unfortunately that means that HDFS cannot be run anymore from source as is
documented in the Wiki (<
https://wiki.apache.org/hadoop/HowToSetupYourDevelopmentEnvironment>) and
as used to be possible before the patch. This is because the hdfs client
classes (e.g. ClientProtocol is the first one that HDFS complains about
during startup) are not in the classpath anymore.

I wonder how all of you are running Hadoop these days from source? I've
always followed the Wiki instructions but maybe they are out of date and
there's a better way?

Thanks,
Lars